See the issue discussed on the Ubuntu wiki here.
"smartctl -a" says the Load_Cycle_Count on my drive is almost at 680,000, when some people are reporting that drives generally only live to 600,000 of these (*reaches for backup drive, haven't backed up /home in a while*).
Looking at "hdparm -I /dev/sda", it seems the APM level on my drive was set to 128; the Ubuntu-recommended fix is to set it to 255 (off) or 254 (lowest level).
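For anyone who wants to check their own drive, the sequence looks roughly like this; just a sketch, and /dev/sda is an assumption (adjust for your disk):
# Current head-parking count and power-on hours (SMART attributes 193 and 9)
smartctl -a /dev/sda | grep -E 'Load_Cycle_Count|Power_On_Hours'
# Current APM level -- 128 is a common aggressive default on laptop drives
hdparm -I /dev/sda | grep -i 'Advanced power management'
# Turn APM off (255) or set it to the least aggressive level (254)
hdparm -B 254 /dev/sda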
EDIT: Changed Subject of post so as to not frighten non-laptop users. Maybe I should have posted this in the Laptop section of the forum, though some people (including me) use 2.5" drives in their desktop machines for lower power usage.
Last edited by loserMcloser (2007-10-30 20:07:26)
Arch scripts don't set the load cycle anywhere, so you're experiencing normal wear.
128 sounds like a default setting.
Arch scripts don't set the load cycle anywhere, so you're experiencing normal wear.
128 sounds like a default setting.
I believe that not overriding the default settings is one of the complaints being leveled if I'm reading the material correctly:
It appears to be the official policy of Ubuntu (citation/disagreement, anyone?) that by default, Ubuntu should not adjust any power management settings of the harddisk. Unfortunately, this policy has two negative effects: It leaves quite a few people with broken hard drives that would otherwise not be broken, and it quite simply makes people who love Ubuntu feel neglected. This issue has been going on a long time.
The problem appears to be that some manufacturers' defaults are too aggressive and that Ubuntu might cause too many unbuffered disk accesses -- the combination of which can cause over a thousand parks a day on some systems.
From further reading of linked material, the claims and concerns are basically these:
- HDD manufacturers are setting aggressive defaults. Some are saying the balance is struck in favor of longer battery uptime at the expense of HDD wear. The theory goes that the former helps the laptop companies promote the battery uptime as a selling point, and the latter helps HDD manufacturers sell more disks;
- Other OSes (Windows, for example) override the defaults, resulting in less aggressive settings and, theoretically, longer disk life when Load_Cycle_Count is compared to the stated life-expectancy specs for the HDDs.
Edit:
punctuation
Last edited by MrWeatherbee (2007-10-30 21:21:52)
I got worried about this when I read the Slashdot article. All I set up was laptop-mode-tools with an unmodified .conf - my 8-month-old laptop has this:
[root@domu karag]# smartctl -a /dev/sda |grep Cycle
193 Load_Cycle_Count 0x0032 090 090 000 Old_age Always - 21000
but I don't know if 21000 is too much?
Arch scripts don't set the load cycle anywhere
That's fine, I'm (now) doing it in rc.local, just wanted to let other archers know about this.
you're experiencing normal wear
I don't recall when I bought the drive, but the drive label says "Date: 06111", which I take to be either Jan. or Nov. 06, in which case the drive is less than two years old from date of manufacture. It's a Seagate Momentus 7200.1, and the data sheet says "Load/Unload Cycles: 600,000" under "Reliability/Data integrity".
# smartctl -a /dev/sda | grep Load_Cycle
193 Load_Cycle_Count 0x0032 001 001 000 Old_age Always - 678089
I don't consider exceeding the manufacturer's reliability spec in under 2 years to be "normal" wear, especially on a drive that Seagate considers reliable enough to offer a 5-year warranty on. Though I'm sure if it fails and I try to return it under warranty, they'll be happy to call it "normal" wear and not replace the darn thing.
Oh well, drives are cheap and (some) data is precious. I guess I'll shell out for a replacement drive as a precaution, and stick this one in an external enclosure and use it as portable storage.
I simply don't have APM in the kernel.
After 1.5 years I have:
sudo smartctl -a /dev/sda |grep Cycle
193 Load_Cycle_Count 0x0032 092 092 000 Old_age Always - 161569
This is a Fujitsu SATA drive.
Reliability:
Load/Unload cycles 600,000 cycles
I absolutely knew this was going to happen. The sound was driving me nuts as it was.
I'd had my laptop about three days before I figured out how to disable power management and never looked back.
Cthulhu For President!
Ayeeeeee.
iphitus@laptop:~/src/sdricoh_cs-0.1.1$ sudo smartctl -a /dev/hda | grep Load_Cycle
193 Load_Cycle_Count 0x0032 074 074 000 Old_age Always - 268021
6 month old toshiba drive.
edit: It's just gone up by 4 in the time since I posted this... I feel like I'm sitting on a time bomb!
edit: Looks like it goes up by about 2 a minute... which gives my drive roughly a 5000-hour life.
James
Last edited by iphitus (2007-10-31 03:10:43)
Ouch:
smartctl --attributes /dev/sda | grep Load_Cycle_Count
193 Load_Cycle_Count 0x0032 001 001 000 Old_age Always - 731165
(After only a few months of use - it's a ThinkPad T60p I got last spring.) Not sure why my count is so high, but hdparm -B 255 /dev/sda seems to have stopped it. I've put that line in /etc/rc.local...is that the 'right' way to do it?
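For reference, what I've got in rc.local now, plus a quick way to confirm the drive accepted it (a sketch only; the || fallback to 254 is an untested idea, and the device name is specific to my box):
# /etc/rc.local -- re-apply the APM setting at the end of boot.
# 255 disables APM entirely; some drives only accept 254 (lowest level).
hdparm -B 255 /dev/sda || hdparm -B 254 /dev/sda
# Afterwards, querying with -B and no value should show the new level:
# hdparm -B /dev/sda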
I was quite worried when I looked at this earlier in the day.
> sudo smartctl -d ata -a /dev/sda | grep Power_On_Hours
9 Power_On_Hours 0x0032 099 099 000 Old_age Always - 1537
> sudo smartctl -d ata -a /dev/sda | grep Load_Cycle_Count
193 Load_Cycle_Count 0x0032 025 025 000 Old_age Always - 150401
So by my calculations, that's about 100 per hour of use. The Power_On_Hours figure seems right (4 months @ ~12 hours a day). Anyway, at the current rate I'll get ~1 year before my laptop hits its "limit"... I noticed I had forgotten to add noatime to my fstab entries, and adding it reduced the rate by ~1/2, so an extra 6 months. I'll be lucky if my laptop survives that long anyway.
Using hdparm doesn't seem to work on my laptop (Dell Latitude D520, can't remember the hard drive manufacturer).
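If anyone else wants to do the same cycles-per-hour arithmetic, a rough one-liner along these lines should work (just a sketch; it assumes both attributes show up and that the raw values in the last column are plain numbers):
sudo smartctl -d ata -a /dev/sda | awk '
    /Power_On_Hours/   { hours  = $10 }   # raw value is the last column
    /Load_Cycle_Count/ { cycles = $10 }
    END { if (hours > 0) printf "%.1f load cycles per hour\n", cycles / hours }'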
While we're on the subject, has anyone gotten the power management setting to stick coming out of suspend? I have OnResume hdparm blah blah in my common.conf but it doesn't appear to do anything.
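If nothing else works, I suppose a crude fallback would be to re-run hdparm from whatever script your suspend framework runs on wake-up; a sketch of what I have in mind (the /etc/pm/sleep.d location and the resume/thaw argument convention are assumptions about pm-utils, so adapt it to your setup):
#!/bin/bash
# Hypothetical resume hook: many drives revert to their default APM level
# after a suspend/resume cycle, so re-apply the setting on wake-up.
case "$1" in
    resume|thaw)
        hdparm -B 255 /dev/sda
        ;;
esac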
Cthulhu For President!
Should desktop users be worried about this? Also, why doesn't my smartctl -a output have a Load_Cycle_Count entry?
Should desktop users be worried about this? Also, why doesn't my smartctl -a output have a Load_Cycle_Count entry?
Probably not... and your hard drive is probably not supported by smartctl (there are quite a few that are not).
zodmaner wrote: Should desktop users be worried about this? Also, why doesn't my smartctl -a output have a Load_Cycle_Count entry?
Probably not... and your hard drive is probably not supported by smartctl (there are quite a few that are not).
Well, that's good news. But my hard drive should, according to hdparm -I, support SMART. Here's a portion of my smartctl output:
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000f 055 050 006 Pre-fail Always - 17391407
3 Spin_Up_Time 0x0003 096 096 000 Pre-fail Always - 0
4 Start_Stop_Count 0x0032 100 100 020 Old_age Always - 450
5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 0
7 Seek_Error_Rate 0x000f 078 060 030 Pre-fail Always - 80116759
9 Power_On_Hours 0x0032 097 097 000 Old_age Always - 3239
10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0
12 Power_Cycle_Count 0x0032 100 100 020 Old_age Always - 937
194 Temperature_Celsius 0x0022 036 045 000 Old_age Always - 36 (Lifetime Min/Max 0/20)
195 Hardware_ECC_Recovered 0x001a 055 050 000 Old_age Always - 17391407
197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0
200 Multi_Zone_Error_Rate 0x0000 100 253 000 Old_age Offline - 0
202 TA_Increase_Count 0x0032 100 253 000 Old_age Always - 0
As you can see, no Load_Cycle_Count. (Or is it because my hard drive doesn't support this particular attribute?)
Last edited by zodmaner (2007-10-31 06:05:48)
I've done a lot of HDD testing and benchmarking, including regular fsck'ing and smartctl'ing, for many dozens of different disks. And after all that, I couldn't care less about most of those counters. Keep an eye on 'smartctl -A /dev/sda | grep Sector' and perhaps the temperature (<45°C), that's it.
Concerning "Load_Cycle_Count": I bought my laptop over eight years ago, and the disk has had issues for over six years now. I'm not sure how to describe them; let's simply call it "load-cycle/recalibration" issues. That particular counter went nuts, even without any power management... up into the millions, overflowing and climbing again. It was very audible and therefore easy to double-check the figures, but after all that, the disk is fine to this day without a single remapped sector.
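In other words, the check worth keeping is something like this (a sketch, with /dev/sda assumed):
# The counters that actually predict failure: remapped and pending sectors.
# Non-zero and growing values here are a real reason to reach for the backups.
smartctl -A /dev/sda | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'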
Although Arch wasn't setting any hard drive power management levels, after 10 months of use I have:
/home/alex % sudo smartctl -a /dev/sda|grep Load
193 Load_Cycle_Count 0x0012 002 002 000 Old_age Always - 981197
I feel a bit like Damocles here. To buy some time, I've *enabled* the following in laptop-mode.conf (before, I was leaving them alone):
CONTROL_HD_POWERMGMT=1
# Power management for HD (hdparm -B values)
BATT_HD_POWERMGMT=255
LM_AC_HD_POWERMGMT=255
NOLM_AC_HD_POWERMGMT=255
And that seems to have stopped the Load_Cycle_Count creep.
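If anyone wants to double-check that the creep has really stopped, a quick sketch like this lets you watch the counter for a while (device name assumed):
# Print the Load_Cycle_Count raw value once a minute; it should stay flat
# once the drive is no longer parking its heads constantly.
while true; do
    date
    sudo smartctl -a /dev/sda | awk '/Load_Cycle_Count/ { print "  cycles:", $10 }'
    sleep 60
done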
Allan wrote: zodmaner wrote: Should desktop users be worried about this? Also, why doesn't my smartctl -a output have a Load_Cycle_Count entry?
Probably not... and your hard drive is probably not supported by smartctl (there are quite a few that are not).
<snip>
As you can see, no Load_Cycle_Count. (Or is it because my hard drive doesn't support this particular attribute?)
Firstly, I think you would be hard-pressed to find installed HDDs nowadays that don't support SMART, but I guess there may be some relics still in operation.
Secondly, the absence or presence of Load_Cycle_Count is relative to the type of disk primarily, as it is a metric of a feature usually only found in mobile-HDD applications, i.e. it indicates the number of times the heads are parked and moved back into position over the platter. The parking technology is an effort to mitigate the heads contacting the platters during impact or jarring, both of which are more likely to occur in mobile PCs.
Though I agree that the topic is a very important one, what I find most interesting about all of this is how old the issue is (compared to the current headline-making stories), and how Ubuntu became the poster-child for the problem when in fact the issue is really distribution / OS agnostic. I suppose part of the jam for Ubuntu is that apparently bug reports were issued, and the community decided perhaps the lack of timely action warranted the publicity.
Personally, I feel the "fix" is simply the knowledge that a user may experience the issue and the knowledge of how to address the issue, especially for a distribution like Arch. On the other hand, I believe the Ubuntu'ers are pressing for an installed default override script, which is fine as well, as long as the user is free to change it to whatever he prefers.
Why do I have two numbers?
[root@myhost ganlu]# sudo smartctl -d ata -a /dev/sda | grep Load_Cycle_Count
193 Load_Cycle_Count 0x0032 072 072 000 Old_age Always - 173607/173436
My laptop is a second-hand IBM R40.
geez
I just checked, and I've got 50,000+ cycles in two months!
How do I fix that?
Is this only going on while using battery power?
UPDATE:
I configured the laptop-mode.conf file as stated above and the count did seem to drop. No noticeable performance issues either, although I haven't really put anything to the test. Haven't tried using hdparm yet.
Also, come to think of it, 50k cycles in two months of rather high usage (and heavy battery usage at that) puts my HDD in the ballpark of about two years. While you'd like everything to last as long as possible, in two years even this baby will probably show signs of age. Oh well.
I hope this gets looked at... when the operating system contributes to hardware damage, it's never good publicity.
Last edited by pedepy (2007-11-03 14:18:22)
chupocabra ... psupsuspsu psu psu
Shouldn't every user be warned about this?
I only heard about it today from a friend that gave me a link to an ubuntu thread.
However, I checked the values, and they look fine here:
sudo smartctl -d ata -a /dev/sda | egrep 'Load_Cycle|Power_On'
9 Power_On_Seconds 0x0032 090 090 000 Old_age Always - 5078h+41m+46s
193 Load_Cycle_Count 0x0032 099 099 000 Old_age Always - 29868
Which gives me less than 6 cycles / hour.
But the friend who told me about it already has 41397 cycles in only 348 hours, so 119 cycles / hour.
That looks pretty scary, so I told him to try that hdparm -B 254 or 255 command; we'll see if that helps.
But I think this information is pretty important and, as such, should be spread properly, not just hidden in a little thread in the Kernel & Hardware section of the forums.
Because if all that information from Ubuntu is correct, you can hardly notice the issue until it's too late. Or am I wrong?
pacman roulette : pacman -S $(pacman -Slq | LANG=C sort -R | head -n $((RANDOM % 10)))
Personally, I feel the "fix" is simply the knowledge that a user may experience the issue and the knowledge of how to address the issue, especially for a distribution like Arch. On the other hand, I believe the Ubuntu'ers are pressing for an installed default override script, which is fine as well, as long as the user is free to change it to whatever he prefers.
So, what do you suggest for spreading this knowledge in Arch?
pacman roulette : pacman -S $(pacman -Slq | LANG=C sort -R | head -n $((RANDOM % 10)))
See the issue discussed on the Ubuntu wiki here.
"smartctl -a" says the Load_Cycle_Count on my drive is almost at 680,000, when some people are reporting that drives generally only live to 600,000 of these (*reaches for backup drive, haven't backed up /home in a while*).
Looking at "hdparm -I /dev/sda", it seems the APM level on my drive was set to 128; the Ubuntu-recommended fix is to set it to 255 (off) or 254 (lowest level).
EDIT: Changed Subject of post so as to not frighten non-laptop users. Maybe I should have posted this in the Laptop section of the forum, though some people (including me) use 2.5" drives in their desktop machines for lower power usage.
Could you make a bug report for this, maybe even mark it as critical?
I think it's better if the bug is reported by someone who does have the issue.
After that, probably a ML/forum announcement should be made, so that every user of such drives can check with
sudo smartctl -d ata -a /dev/sda | egrep 'Load_Cycle|Power_On'
to see if he's affected.
And in that case, he can manually add hdparm -B 255 /dev/sda to rc.local (or -B 254).
pacman roulette : pacman -S $(pacman -Slq | LANG=C sort -R | head -n $((RANDOM % 10)))
On my laptop (5 months old) the output is:
[root@L3-LR893 ~]# smartctl -a /dev/sda | grep Load
193 Load_Cycle_Count 0x0012 094 094 000 Old_age Always - 64833
Is there anything to worry about?
PS: My laptop runs 12 hours a day.
On my laptop (5 months old) the output is:
[root@L3-LR893 ~]# smartctl -a /dev/sda | grep Load
193 Load_Cycle_Count 0x0012 094 094 000 Old_age Always - 64833
Is there anything to worry about?
PS: My laptop runs 12 hours a day.
Why didn't you try the command I gave, to also show "power on hours" or "power on seconds"?
It might be a good idea to check that this value is what you expect (~1800), though.
So in your case, that's ~36 load cycles / hour, which is a bit high, but maybe not critical. Some people have more than 100.
But the Ubuntu bug report recommends less than 15:
https://launchpad.net/ubuntu/+source/ac … bug/59695/
So just add "hdparm -B 254" to /etc/rc.local, and that's all.
pacman roulette : pacman -S $(pacman -Slq | LANG=C sort -R | head -n $((RANDOM % 10)))
Thanks.
It comes to around 36 per hour,
but it has stopped since I did "hdparm -B 255 /dev/sda".