In case someone has missed this issue so far, here is an explanation and a solution: http://www.ngohq.com/news/19805-critica … -hdds.html
The Arch Wiki also provides a solution: https://wiki.archlinux.org/index.php/Ad … Green_HDDs
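For anyone who wants to check whether their drive is affected, something like this should show the relevant SMART attributes (just a sketch; smartmontools needs to be installed, and /dev/sdX is whichever drive you want to check):
# Attribute 193 (Load_Cycle_Count) is the one that keeps climbing on affected drives
sudo smartctl -A /dev/sdX | grep -e Power_On_Hours -e Load_Cycle_Count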
Offline
Old news. We have known about that for about two years. Keep in mind only certain batches are affected. I have three WD Green 1 TB HDDs myself, and none of them suffer from that issue.
Last edited by .:B:. (2011-04-22 09:12:43)
Got Leenucks? :: Arch: Power in simplicity :: Get Counted! Registered Linux User #392717 :: Blog thingy
Offline
The Arch Wiki also provides a solution: https://wiki.archlinux.org/index.php/Ad … Green_HDDs
In my case, 'hdparm -S 242 /dev/sdX' definitely doesn't prevent head loads within the space of an hour.
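To see that, I just compare the raw value of attribute 193 before and after an hour, roughly like this (a sketch, with /dev/sdX standing in for the drive in question):
sudo smartctl -A /dev/sdX | grep Load_Cycle_Count   # note the raw value at the end of the line
sleep 3600
sudo smartctl -A /dev/sdX | grep Load_Cycle_Count   # an hour later the count has increased anyway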
"I exist" is the best myth I know..
Offline
Old news. We have known about that for about two years. Keep in mind only certain batches are affected. I have three WD Green 1 TB HDDs myself, and none of them suffer from that issue.
I bought a WD Green 2 TB for backup purposes recently, and I didn't know about this issue until I came across the mentioned wiki page. Since my HDD is sent to sleep from rc.local I am not really affected, but I thought it might come in handy for some people. Well, as it seems I was wrong...
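For what it's worth, the relevant bit of my /etc/rc.local looks roughly like this (just a sketch; the device name and the timeout value are placeholders rather than a recommendation):
# Backup drive: set a 30-minute standby timeout and spin it down right away
hdparm -S 241 /dev/sdb   # 241 = 1 unit of 30 minutes
hdparm -y /dev/sdb       # enter standby immediately; the drive spins up again on access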
Offline
Well, as it seems I was wrong...
I don't think so. As for me, I "discovered" this WD problem from your post, thanks!
"I exist" is the best myth I know..
Offline
Well, as it seems I was wrong...
You weren't wrong. It's never a bad idea to bring such things back to the attention of Linux users, since obviously nobody is going to dig through the forums to find a post that's two years old. People go look for reliability issues when they're in the market for a new hard drive, but not for this kind of stuff.
I find it very sad to see this issue still persists to this day, since the list in your article also mentions 1.5 TB and 2 TB drives. Way to go, Western Digital.
Last edited by .:B:. (2011-04-22 11:06:59)
Got Leenucks? :: Arch: Power in simplicity :: Get Counted! Registered Linux User #392717 :: Blog thingy
Offline
It seems nobody has presented real facts proving a correlation between a high load count and a shortened HDD lifetime (I mean in the context of these concrete WD HDD models). Have I missed something?
"I exist" is the best myth I know..
Offline
Have you read the thread I linked to?
Got Leenucks? :: Arch: Power in simplicity :: Get Counted! Registered Linux User #392717 :: Blog thingy
Offline
Have you read the thread I linked to?
I have. But I have not found any statistical information, i.e. facts. If a high load count by itself (I am not talking about the cases with audible clicks) inevitably resulted in HDD failure, we would have known about such massive failures years ago.
I'm interested in the topic because all three HDDs I have are WD Green 1 TB ones (with a rather high loads-per-hour count). But I have found only assumptions.
Last edited by student975 (2011-04-22 12:41:17)
"I exist" is the best myth I know..
Offline
With 300k reportedly being the lifetime spec, I guess it *is* an issue. Not that WD seems to care about Linux though...
Last edited by .:B:. (2011-04-22 14:39:34)
Got Leenucks? :: Arch: Power in simplicity :: Get Counted! Registered Linux User #392717 :: Blog thingy
Offline
I guess...
I can guess the same. We are only guessing. Where are those massive failure reports from users with 1000K load counts?
"I exist" is the best myth I know..
Offline
They clearly state that they only support Windows (on the HD front)... quite strange, since WD has other products based on Linux which they actively support.
R00KIE
Tm90aGluZyB0byBzZWUgaGVyZSwgbW92ZSBhbG9uZy4K
Offline
I've been looking at getting one of these drives since last summer, since they come up on sale at Newegg often (hmmm), but most models seem to have user reviews from an unusually large number of people who got them DOA, and others reporting ridiculously short lifespans (I'm currently using a 150 GB WD drive I bought about 7 years ago).
I'm not sure why so many are DOA, but reports of them dying after a few months would seem to indicate the load counts take their toll, although that's far from scientific...
Last edited by Don Coyote (2011-04-22 18:32:42)
Offline
The Arch Wiki also provides a solution: https://wiki.archlinux.org/index.php/Ad … Green_HDDs
Hey! Glad someone found some value in what I wrote
CPU-optimized Linux-ck packages @ Repo-ck • AUR packages • Zsh and other configs
Offline
.:B:. wrote:I guess...
I can guess the same. We are only guessing. Where are those massive failure reports from users with 1000K load counts?
Assuming they are aware of the problem at all, and that they did not get the drive RMA'ed under warranty? I think the drives suffering from that problem die well within their warranty period, and thus end up being replaced.
Yes, that's a lot of assumptions. But most users will just carry on instead of digging into the underlying problem.
They clearly state that they only support Windows (on the HD front)... quite strange, since WD has other products based on Linux which they actively support.
I know they lost a customer here from the moment I heard it.
Last edited by .:B:. (2011-04-22 20:21:07)
Got Leenucks? :: Arch: Power in simplicity :: Get Counted! Registered Linux User #392717 :: Blog thingy
Offline
I know they lost a customer here from the moment I heard it.
Last time I checked (maybe I didn't check well enough) they had this [1]. At least, I found this or some disclaimer clearly leaving Linux out of the supported OSes. Meanwhile, and maybe because of the squeaky-wheel effect, they now seem to have this [2].
[1] http://wdc.custhelp.com/app/answers/detail/a_id/987/
[2] http://wdc.custhelp.com/app/answers/detail/a_id/5357
R00KIE
Tm90aGluZyB0byBzZWUgaGVyZSwgbW92ZSBhbG9uZy4K
Offline
Seems I have this on my WD10EARS-003BB1:
1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0
3 Spin_Up_Time 0x0027 128 123 021 Pre-fail Always - 6566
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 167
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0
9 Power_On_Hours 0x0032 099 099 000 Old_age Always - 847
10 Spin_Retry_Count 0x0032 100 100 000 Old_age Always - 0
11 Calibration_Retry_Count 0x0032 100 100 000 Old_age Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 165
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 32
193 Load_Cycle_Count 0x0032 181 181 000 Old_age Always - 59546
Load cycle count of 59546? Is that correct? For only 167 boots?
After reading this thread I added the hdparm fix in rc.local and hope for the best.
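In case it helps, the line I mean is essentially the one mentioned earlier in the thread (a sketch; /dev/sda is just a placeholder for the affected drive):
# /etc/rc.local - the hdparm workaround discussed above
hdparm -S 242 /dev/sda   # 242 = (242 - 240) * 30 min = 1 hour standby timeout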
Offline
Seems I have this on my WD10EARS-003BB1:
1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0
3 Spin_Up_Time 0x0027 128 123 021 Pre-fail Always - 6566
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 167
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0
9 Power_On_Hours 0x0032 099 099 000 Old_age Always - 847
10 Spin_Retry_Count 0x0032 100 100 000 Old_age Always - 0
11 Calibration_Retry_Count 0x0032 100 100 000 Old_age Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 165
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 32
193 Load_Cycle_Count 0x0032 181 181 000 Old_age Always - 59546
Load cycle count of 59546? Is that correct? For only 167 boots?
After reading this thread I added the hdparm fix in rc.local and hope for the best.
I have approximately the same ratio. I mean attribute 193 / attribute 9: the start/stop count isn't as interesting as the power-on hours are, I think. That hdparm setting doesn't change anything in my use cases.
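If anyone wants the ratio directly, a one-liner along these lines should print load cycles per power-on hour (just a sketch; /dev/sdX is a placeholder):
# Divide the raw values of attributes 193 (Load_Cycle_Count) and 9 (Power_On_Hours)
sudo smartctl -A /dev/sdX | awk '$1 == 9 {h=$10} $1 == 193 {c=$10} END {print c/h, "load cycles per power-on hour"}'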
Last edited by student975 (2011-04-25 11:48:28)
"I exist" is the best myth I know..
Offline
It seems that sudo hdparm -S 242 doesn't change anything here either; the load count is still rising. And disabling parking altogether with sudo hdparm -B 254 /dev/sda gives:
/dev/sda:
setting Advanced Power Management level to 0xfe (254)
HDIO_DRIVE_CMD failed: Input/output error
APM_level = not supported
Some recommend setting SATA to IDE mode in the BIOS (or something like that), but I have no setting for this in my BIOS.
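For completeness, before trying to set APM it may be worth checking whether the drive reports the feature at all (a quick sketch):
sudo hdparm -B /dev/sda                              # read the current APM level, if any
sudo hdparm -I /dev/sda | grep -i "advanced power"   # see whether the APM feature set is listed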
Offline
@student975: This link http://www.rrfx.net/2010/03/western-dig … rives.html confirms your point of view that the load cycle count isn't too much of a worry. That guy got something like 2.5 times the life expectancy out of his disks.
Offline
@student975: This link http://www.rrfx.net/2010/03/western-dig … rives.html
Thanks for the ref! There is indeed a valuable investigation there. I stumbled over the phrase "This results in less friction on the disk platters..." (a head doesn't touch a platter, it flies above it), but that is beyond the scope here.
... confirms your point of view that the load cycle count isn't too much of a worry. That guy got something like 2.5 times the life expectancy out of his disks.
I'd like to clarify: I merely allow for this point of view. Moreover, despite having a high load/hours ratio, I have never heard a load click (and, having an almost silent workstation and NAS, I fight against any noise thoroughly and, as a result, try to hear everything that can be heard from the hardware).
"I exist" is the best myth I know..
Offline
Here's a link from WD support:
http://wdc.custhelp.com/app/answers/detail/a_id/5357
where it says that the disks are good for 1 million load cycles. Not bad, but I'm up to 60,000 after just a couple of months and only 170 boot-ups. Has anyone tried the official WD wdidle3 utility on a working disk?
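Not on a working disk here, but as far as I know there is also a native Linux alternative, idle3-tools, which adjusts the same idle3 (head parking) timer that wdidle3 does. Roughly like this, though I may be misremembering the exact options, so check its documentation first:
sudo idle3ctl -g /dev/sdX   # read the current idle3 timer
sudo idle3ctl -d /dev/sdX   # disable it; the drive reportedly needs a full power-off afterwards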
Offline
An update:
9 Power_On_Hours 0x0032 094 094 000 Old_age Always - 4666
193 Load_Cycle_Count 0x0032 043 043 000 Old_age Always - 473482
So I've had this WD disk for about a year now and it's still OK, but the power-on hours seem a bit too high. And it seems I have used up half its life span in load cycles. I'm using hdparm -S 242 /dev/sda in rc.local, with possibly no effect.
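Rough arithmetic on those numbers, taking the 1 million load cycle figure from the WD page above:
473482 load cycles / 4666 power-on hours ≈ 101 load cycles per hour
473482 / 1000000 ≈ 47% of the rated load cycles used up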
Offline
Technically this is a necrobump, but this info may make you feel better.
I purchased two WDC WD10EARS-22Y drives in January 2011 and they show:
$ sudo smartctl /dev/sdb -a | grep -e '^ 9' -e '^193'
9 Power_On_Hours 0x0032 087 087 000 Old_age Always - 10005
193 Load_Cycle_Count 0x0032 001 001 000 Old_age Always - 1011434
$ sudo smartctl /dev/sdc -a | grep -e '^ 9' -e '^193'
9 Power_On_Hours 0x0032 087 087 000 Old_age Always - 10005
193 Load_Cycle_Count 0x0032 001 001 000 Old_age Always - 1008385
Initially I had my entire system on these drives, including the root fs, though in June 2011 I switched my root fs to another drive and have been using the WD10EARS-22Ys for my home partition only. They are well over 90% full, so I will be upgrading them soon and relegating them to a cheap NAS box. I might have to take more notice of this issue with the next set of drives.
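For the record, the rough math behind that, using the 1 million cycle rating mentioned earlier in the thread:
1011434 load cycles / 10005 power-on hours ≈ 101 load cycles per hour
Both drives are already past the 1,000,000 rated load cycles and still running.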
Offline
Well, necrobumping isn't OK, but a follow-up with statistics in this case might be. The concern is the life span of WD disks over time.
Unfortunately my other big disk, a green Samsung, doesn't report attribute 193 (Load_Cycle_Count), so no comparison can be made there. I think my next disks will be Samsung, but, as you said, with some research first.
Offline