Hi there,
After upgrading my home server to the latest kernel, 3.13, I found that my RAID 5 array had become degraded. One of the drives was kicked out, but I don't know why. The drive seems okay; I've also run a SMART short test, which completed without any errors. The only suspicious-looking error message during the upgrade to Linux 3.13 was:
ERROR: Module 'hci_vhci' has devname (vhci) but lacks major and minor information. Ignoring.
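As far as I can tell, that module warning is about the Bluetooth virtual HCI driver and is probably unrelated to storage. To pin down when and why the drive was actually kicked, the kernel log seems like the right place to look; here's roughly what I'd grep for (a sketch; it assumes the kicked member is /dev/sdd and the journal from the relevant boot is still around):
[tolga@Ragnarok ~]$ dmesg | grep -i -E 'raid|md12|sdd'                   # current boot
[tolga@Ragnarok ~]$ sudo journalctl -k -b -1 | grep -i -E 'raid|md12|sdd'  # previous boot, with systemd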
This is the mdstat output:
[tolga@Ragnarok ~]$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md127 : active raid5 sda1[0] sdc1[3] sdb1[1]
5860145664 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/3] [UUU_]
unused devices: <none>
smartctl:
[tolga@Ragnarok ~]$ sudo smartctl -a /dev/sdd
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.13.4-1-ARCH] (local build)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: Western Digital Red (AF)
Device Model: WDC WD20EFRX-68AX9N0
Serial Number: [removed]
LU WWN Device Id: 5 0014ee 2b2cd537a
Firmware Version: 80.00A80
User Capacity: 2,000,398,934,016 bytes [2.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Device is: In smartctl database [for details use: -P show]
ATA Version is: ACS-2 (minor revision not indicated)
SATA Version is: SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is: Fri Feb 21 22:26:30 2014 CET
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x00) Offline data collection activity
was never started.
Auto Offline Data Collection: Disabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: (26580) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 268) minutes.
Conveyance self-test routine
recommended polling time: ( 5) minutes.
SCT capabilities: (0x70bd) SCT Status supported.
SCT Error Recovery Control supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0
3 Spin_Up_Time 0x0027 164 163 021 Pre-fail Always - 6766
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 273
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0
9 Power_On_Hours 0x0032 098 098 000 Old_age Always - 1954
10 Spin_Retry_Count 0x0032 100 100 000 Old_age Always - 0
11 Calibration_Retry_Count 0x0032 100 100 000 Old_age Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 273
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 6
193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 266
194 Temperature_Celsius 0x0022 115 104 000 Old_age Always - 35
196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0030 100 253 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0
200 Multi_Zone_Error_Rate 0x0008 100 253 000 Old_age Offline - 0
SMART Error Log Version: 1
ATA Error Count: 306 (device log contains only the most recent five errors)
CR = Command Register [HEX]
FR = Features Register [HEX]
SC = Sector Count Register [HEX]
SN = Sector Number Register [HEX]
CL = Cylinder Low Register [HEX]
CH = Cylinder High Register [HEX]
DH = Device/Head Register [HEX]
DC = Device Command Register [HEX]
ER = Error register [HEX]
ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.
Error 306 occurred at disk power-on lifetime: 1706 hours (71 days + 2 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
04 61 02 00 00 00 a0 Device Fault; Error: ABRT
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
ef 10 02 00 00 00 a0 08 22:17:38.065 SET FEATURES [Enable SATA feature]
ec 00 00 00 00 00 a0 08 22:17:38.065 IDENTIFY DEVICE
ef 03 46 00 00 00 a0 08 22:17:38.064 SET FEATURES [Set transfer mode]
ef 10 02 00 00 00 a0 08 22:17:38.064 SET FEATURES [Enable SATA feature]
ec 00 00 00 00 00 a0 08 22:17:38.064 IDENTIFY DEVICE
Error 305 occurred at disk power-on lifetime: 1706 hours (71 days + 2 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
04 61 46 00 00 00 a0 Device Fault; Error: ABRT
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
ef 03 46 00 00 00 a0 08 22:17:38.064 SET FEATURES [Set transfer mode]
ef 10 02 00 00 00 a0 08 22:17:38.064 SET FEATURES [Enable SATA feature]
ec 00 00 00 00 00 a0 08 22:17:38.064 IDENTIFY DEVICE
ef 10 02 00 00 00 a0 08 22:17:38.063 SET FEATURES [Enable SATA feature]
Error 304 occurred at disk power-on lifetime: 1706 hours (71 days + 2 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
04 61 02 00 00 00 a0 Device Fault; Error: ABRT
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
ef 10 02 00 00 00 a0 08 22:17:38.064 SET FEATURES [Enable SATA feature]
ec 00 00 00 00 00 a0 08 22:17:38.064 IDENTIFY DEVICE
ef 10 02 00 00 00 a0 08 22:17:38.063 SET FEATURES [Enable SATA feature]
ec 00 00 00 00 00 a0 08 22:17:38.063 IDENTIFY DEVICE
Error 303 occurred at disk power-on lifetime: 1706 hours (71 days + 2 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
04 61 02 00 00 00 a0 Device Fault; Error: ABRT
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
ef 10 02 00 00 00 a0 08 22:17:38.063 SET FEATURES [Enable SATA feature]
ec 00 00 00 00 00 a0 08 22:17:38.063 IDENTIFY DEVICE
ef 03 46 00 00 00 a0 08 22:17:38.063 SET FEATURES [Set transfer mode]
ef 10 02 00 00 00 a0 08 22:17:38.063 SET FEATURES [Enable SATA feature]
ec 00 00 00 00 00 a0 08 22:17:38.062 IDENTIFY DEVICE
Error 302 occurred at disk power-on lifetime: 1706 hours (71 days + 2 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
04 61 46 00 00 00 a0 Device Fault; Error: ABRT
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
ef 03 46 00 00 00 a0 08 22:17:38.063 SET FEATURES [Set transfer mode]
ef 10 02 00 00 00 a0 08 22:17:38.063 SET FEATURES [Enable SATA feature]
ec 00 00 00 00 00 a0 08 22:17:38.062 IDENTIFY DEVICE
ef 10 02 00 00 00 a0 08 22:17:38.062 SET FEATURES [Enable SATA feature]
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed without error 00% 1954 -
# 2 Short offline Completed without error 00% 0 -
# 3 Conveyance offline Completed without error 00% 0 -
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
This is my mdadm configuration:
[tolga@Ragnarok ~]$ cat /etc/mdadm.conf
ARRAY /dev/md/Asura metadata=1.2 UUID=34bab60a:4d640b50:6228c429:0679bb34 name=Ragnarok:Asura
I've checked all partition tables and everything seems okay. The "Error 30[x] occurred at disk power-on lifetime: 1706 hours (71 days + 2 hours)" entries all point to a one-time event at 1706 hours (I don't know why; there was no power loss or anything similar). Other than those smartctl errors, everything seems fine. I've also inspected the drive: no suspicious noises or anything else; it behaves just like the other three drives. Am I safe to simply re-add the drive using "sudo mdadm --manage --re-add /dev/md127 /dev/sdd1" and let it re-sync, or should I flag it as failed first and then re-add it to the RAID?
I am using 4x 2TB Western Digital Red drives in a RAID 5. They are about a year old and ran perfectly fine until now. The server will stay shut down until this problem is fixed. I currently have a partial backup of my data (the most important files) and will make a full backup before attempting a repair. At the moment I'm still able to access all my data, so nothing is lost so far.
So, what do you think I should do?
Am I safe to simply re-add the drive using "sudo mdadm --manage --re-add /dev/md127 /dev/sdd1" and let it re-sync
That should be safe. In any case, safer than not doing it.
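One caveat, as far as I know: --re-add only works when mdadm can pick up where the member left off, i.e. the array has a write-intent bitmap or the member's event counter is still close to the rest of the array. Worth comparing before you try (a sketch; sdd1 is the kicked member, sda1 a healthy one):
sudo mdadm --examine /dev/sdd1 | grep -i events
sudo mdadm --examine /dev/sda1 | grep -i events
If the counts are far apart, the re-add will be refused and a full --add (with a complete resync) is the way to go.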
Thank you, brian, for the fast reply. I've backed up all my important data and tried the command, but it's not working:
[tolga@Ragnarok ~]$ sudo mdadm --manage --re-add /dev/md127 /dev/sdd1
mdadm: --re-add for /dev/sdd1 to /dev/md127 is not possible
[tolga@Ragnarok ~]$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda           8:0    0   1.8T  0 disk
└─sda1        8:1    0   1.8T  0 part
  └─md127     9:127  0   5.5T  0 raid5 /media/Asura
sdb           8:16   0   1.8T  0 disk
└─sdb1        8:17   0   1.8T  0 part
  └─md127     9:127  0   5.5T  0 raid5 /media/Asura
sdc           8:32   0   1.8T  0 disk
└─sdc1        8:33   0   1.8T  0 part
  └─md127     9:127  0   5.5T  0 raid5 /media/Asura
sdd           8:48   0   1.8T  0 disk
└─sdd1        8:49   0   1.8T  0 part
sde           8:64   0  59.6G  0 disk
├─sde1        8:65   0   512M  0 part /boot/efi
├─sde2        8:66   0     4G  0 part [SWAP]
├─sde3        8:67   0  54.6G  0 part /
└─sde4        8:68   0   512M  0 part /boot
Out of curiosity, I've compared the "mdadm -E" output of the kicked drive with that of a healthy one. Here's the diff:
[tolga@Ragnarok ~]$ diff -u sdc sdd
--- sdc 2014-02-21 23:28:51.051674496 +0100
+++ sdd 2014-02-21 23:28:55.911816816 +0100
@@ -1,4 +1,4 @@
-/dev/sdc1:
+/dev/sdd1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
@@ -14,15 +14,15 @@
Data Offset : 262144 sectors
Super Offset : 8 sectors
Unused Space : before=262064 sectors, after=1167 sectors
- State : clean
- Device UUID : 4ce2ba99:645b1cc6:60c23336:c4428e2f
+ State : active
+ Device UUID : 4aeef598:64ff6631:826f445e:dbf77ab5
- Update Time : Fri Feb 21 23:18:20 2014
- Checksum : a6c42392 - correct
- Events : 16736
+ Update Time : Sun Jan 12 06:40:56 2014
+ Checksum : bf106b2a - correct
+ Events : 7295
Layout : left-symmetric
Chunk Size : 512K
- Device Role : Active device 2
- Array State : AAA. ('A' == active, '.' == missing, 'R' == replacing)
+ Device Role : Active device 3
+ Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
The diff also explains why --re-add fails: the event counter on /dev/sdd1 (7295) is far behind the rest of the array (16736), and its last update time is from January, so the drive has been out of the array for over a month. I guess my only way to fix this is to remove the faulty drive from the RAID, zero out its superblock and then re-add it as a new drive. Or is there another way to fix this?
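For reference, the sequence I have in mind is roughly this (a sketch; it assumes /dev/sdd1 is the stale member, and --zero-superblock irreversibly wipes the md metadata on that partition):
[tolga@Ragnarok ~]$ sudo mdadm --manage /dev/md127 --remove /dev/sdd1    # may fail if the device is no longer listed
[tolga@Ragnarok ~]$ sudo mdadm --zero-superblock /dev/sdd1               # wipe the stale superblock
[tolga@Ragnarok ~]$ sudo mdadm --manage /dev/md127 --add /dev/sdd1       # add as a fresh device; triggers a full resync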
//Edit: I've used "mdadm --detail /dev/md127" and found out that the faulty drive wasn't even listed anymore. So instead of using "re-add", I simply added it as a new drive, and it's resyncing now. In about 220 minutes, I'll know more! Is there a way to check for corruption after syncing the drives?
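//Edit: to answer my own question about checking for corruption: the md layer has a built-in consistency pass. A sketch, assuming the array is md127:
[tolga@Ragnarok ~]$ echo check | sudo tee /sys/block/md127/md/sync_action    # start a read-and-verify pass over all stripes
[tolga@Ragnarok ~]$ cat /proc/mdstat                                         # watch progress
[tolga@Ragnarok ~]$ cat /sys/block/md127/md/mismatch_cnt                     # non-zero afterwards = inconsistent stripes found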
//Edit: ^ this worked. My drive probably wasn't kicked by the 3.13 upgrade; I just noticed it then. The drive seems to have been kicked at ~1700 power-on hours for some unknown reason. I've now disconnected and reconnected all the drives to rule out any wiring issues. Since the drive was out of sync, simply re-adding it didn't work; I had to add it to the array again manually, which triggered a resync that took around 3.5 hours. I think that's okay for a 4x 2TB RAID 5 array. Everything is working fine again: no data corruption, nothing. I'll mark this as solved.
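//Edit: to notice a kicked drive sooner next time, I'm also enabling mdadm's monitor mode. A sketch (the mail address is a placeholder, and mdmonitor.service is, as far as I know, what the Arch mdadm package ships):
[tolga@Ragnarok ~]$ echo 'MAILADDR root' | sudo tee -a /etc/mdadm.conf    # where degradation alerts get mailed
[tolga@Ragnarok ~]$ sudo systemctl enable mdmonitor.service
[tolga@Ragnarok ~]$ sudo systemctl start mdmonitor.service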