(This relates to an earlier mdadm problem, but it should not be necessary to read my previous thread. For reference, here is my previous post.)
I'm trying to repair a somewhat messed-up mdadm RAID 5 array. 5 of its 6 drives are running, but one of those 5 (/dev/sde) has sectors that can no longer be written to:
Buffer I/O error on device sde1, logical block 983126736
Buffer I/O error on device sde1, logical block 983126737
Buffer I/O error on device sde1, logical block 983126738
Buffer I/O error on device sde1, logical block 983126739
Buffer I/O error on device sde1, logical block 983126740
Buffer I/O error on device sde1, logical block 983126741
Buffer I/O error on device sde1, logical block 983126742
Buffer I/O error on device sde1, logical block 983126743
Buffer I/O error on device sde1, logical block 983126744
Buffer I/O error on device sde1, logical block 983126745
The first step towards getting this RAID up and running with 6/6 healthy drives would be to make all 5 data-carrying drives writable. To do that, I copied the contents of /dev/sde1 to /dev/sda1; /dev/sda1 is a new partition with exactly the same number of sectors:
sudo dd if=/dev/sde1 of=/dev/sda1 bs=4M conv=noerror
It ran successfully, so the data on /dev/sda1 should be identical to /dev/sde1.
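A caveat worth noting with that dd invocation: conv=noerror on its own drops failed reads without padding, which shifts everything after the first bad block, so conv=noerror,sync (which pads failed reads with zeros) is usually recommended for rescue copies. To double-check whether the copy really is byte-identical, something like this sketch should work (demonstrated on ordinary files as stand-ins for the partitions):

```shell
# Sketch: verify that a dd copy is byte-identical, demonstrated on
# ordinary files. On the real system, compare the partitions directly:
#   cmp /dev/sde1 /dev/sda1
src=$(mktemp); dst=$(mktemp)
head -c 1048576 /dev/urandom > "$src"          # 1 MiB of test data
dd if="$src" of="$dst" bs=4M conv=noerror status=none
cmp -s "$src" "$dst" && echo "copies are identical"
rm -f "$src" "$dst"
```

If cmp reports a difference (or hits EOF early on one side), the copy did not complete cleanly.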
Current partition layout of the involved disks:
# clvn@enigma: sudo fdisk -l /dev/sd{a,e}
Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000a3168
Device Boot Start End Blocks Id System
/dev/sda1 2048 2930274049 1465136001 fd Linux raid autodetect
Disk /dev/sde: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x7b48ff68
Device Boot Start End Blocks Id System
/dev/sde1 63 2930272064 1465136001 fd Linux raid autodetect
(Identical number of sectors; only the starting offsets differ.)
# clvn@enigma: sudo mdadm --examine --verbose /dev/sde1
/dev/sde1:
Magic : a92b4efc
Version : 0.90.00
UUID : e88a3b09:b4cd0d6e:dc342cdc:19e2ab4a
Creation Time : Fri Oct 2 03:43:23 2009
Raid Level : raid5
Used Dev Size : 1465135936 (1397.26 GiB 1500.30 GB)
Array Size : 7325679680 (6986.31 GiB 7501.50 GB)
Raid Devices : 6
Total Devices : 6
Preferred Minor : 0
Update Time : Sun Feb 20 10:27:02 2011
State : clean
Active Devices : 5
Working Devices : 6
Failed Devices : 1
Spare Devices : 1
Checksum : 2c49d439 - correct
Events : 1735369
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 4 8 65 4 active sync /dev/sde1
0 0 8 17 0 active sync /dev/sdb1
1 1 0 0 1 faulty removed
2 2 8 97 2 active sync /dev/sdg1
3 3 8 49 3 active sync /dev/sdd1
4 4 8 65 4 active sync /dev/sde1
5 5 8 33 5 active sync /dev/sdc1
6 6 8 81 6 spare /dev/sdf1
# clvn@enigma: sudo mdadm --examine --verbose /dev/sda1
mdadm: No md superblock detected on /dev/sda1.
This is what happens when I try to force the RAID up with the new partition containing the same data as the damaged partition:
# clvn@enigma: sudo mdadm --assemble -v /dev/md0 --force /dev/sdc1 /dev/sdd1 /dev/sdg1 /dev/sdb1 /dev/sda1
mdadm: looking for devices for /dev/md0
mdadm: no RAID superblock on /dev/sda1
mdadm: /dev/sda1 has no superblock - assembly aborted
So... why doesn't /dev/sda1 contain the same superblock as /dev/sde1? Is it possible to create a new superblock based on the one on the damaged disk?
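If I understand the 0.90 metadata format correctly (my assumption, not something I've verified against the spec), the superblock sits in the last 64 KiB-aligned chunk of the partition, so with both partitions the same size it should land at the same offset on each. That would mean a complete dd copy should have carried it over, and mdadm not finding one suggests the tail of the copy never made it across:

```shell
# Assumed v0.90 rule: superblock offset = (sectors & ~127) - 128,
# in 512-byte sectors (128 sectors = 64 KiB).
sectors=2930272002   # size of both sde1 and sda1, from fdisk above
offset=$(( (sectors & ~127) - 128 ))
echo "superblock expected at sector $offset"
```

Reading 128 sectors at that offset on each partition and comparing them would show whether the superblock actually got copied.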
(If anything in this post is hard to understand, please tell me so that I can clarify.) Thanks a lot, everyone!
Does "Number" show the order of the partitions in the array?
mdadm --examine --verbose /dev/sde1
[...]
Number Major Minor RaidDevice State
this 4 8 65 4 active sync /dev/sde1
0 0 8 17 0 active sync /dev/sdb1
1 1 0 0 1 faulty removed
2 2 8 97 2 active sync /dev/sdg1
3 3 8 49 3 active sync /dev/sdd1
4 4 8 65 4 active sync /dev/sde1
5 5 8 33 5 active sync /dev/sdc1
6 6 8 81 6 spare /dev/sdf1
And could this information be used to force creation of superblocks when combined with create-mode?
(Keep in mind that /dev/sda1 = dd'ed /dev/sde1)
mdadm --create /dev/md0 --verbose --level=5 --raid-devices=6 /dev/sdb1 missing /dev/sdg1 /dev/sdd1 /dev/sda1 /dev/sdc1
^ Would this be correct? Or am I missing something here?
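Spelled out with every parameter from the --examine output above (metadata 0.90, chunk 64K, left-symmetric layout), and with --assume-clean so that mdadm does not start a resync over the existing data, the full command would presumably look like this (sketch only; if any parameter differs from the original array, this is destructive):

```shell
# Sketch: recreate the superblocks in place. All values are taken from
# the --examine output earlier in the thread; --assume-clean keeps
# mdadm from resyncing (and thus rewriting) the existing data.
mdadm --create /dev/md0 --verbose \
  --metadata=0.90 --level=5 --raid-devices=6 \
  --chunk=64 --layout=left-symmetric --assume-clean \
  /dev/sdb1 missing /dev/sdg1 /dev/sdd1 /dev/sda1 /dev/sdc1

# Sanity-check read-only before writing anything to the array:
fsck -n /dev/md0
```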
One last bump. If I don't receive any feedback on the command above, I'll run it. Maybe it will solve all my problems, maybe it will corrupt a lot of data.
Yes, the drive is bad, but the data is intact. That's why I dd'ed the data to sda1 and want to force the RAID up with sda as one of the five out of six drives. Thanks for your input, though!
Re-partition sda so that sda1 also starts at sector 63, then dd the data over again.
Mark sde1 as faulty and add sda1. Run a check on the array afterwards.
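In commands, that sequence would look something like this (sketch; device names as used in this thread, and it assumes the array is already assembled as /dev/md0):

```shell
# Swap the failing disk for the dd'ed copy, then verify parity.
mdadm /dev/md0 --fail /dev/sde1      # mark the failing member faulty
mdadm /dev/md0 --remove /dev/sde1    # detach it from the array
mdadm /dev/md0 --add /dev/sda1       # add the freshly copied partition

# Kick off a consistency check and watch its progress:
echo check > /sys/block/md0/md/sync_action
cat /proc/mdstat
```

Any discrepancies found by the check are reported in /sys/block/md0/md/mismatch_cnt.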