Hi,
I had a RAID 1 array set up - my first one, so I didn't do it correctly: I didn't update mdadm.conf with the array details. I restarted my computer and knew something was wrong when it could not mount the RAID array.
Both partitions are being detected correctly, but when I try to assemble them with mdadm -A /dev/md0 /dev/sdb1 /dev/sdf1, it reports that both partitions have no superblock!
Not really sure where to go from here - the array had data on it that I must not lose. Any advice?
Last edited by teepee47 (2011-07-10 08:37:24)
The first and most important thing is to recover your data; then we can play around with the RAID array. Because you're using RAID level 1, you can mount one of the drives and copy data off it without assembling the array.
1) Make a random folder
2) mount /dev/sdb1 newfolder
3) save all critical data
4) work on fixing array
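The steps above might look something like this - a sketch only, where /dev/sdb1 and /backup are placeholders for your own member partition and destination, and the mount is read-only so nothing on the member disk gets modified:

```shell
# Example only: /dev/sdb1 and /backup are placeholders for your setup.
mkdir -p /mnt/rescue

# Read-only mount, so the RAID member is not written to in any way.
mount -o ro /dev/sdb1 /mnt/rescue

# -a preserves permissions, ownership and timestamps while copying.
cp -a /mnt/rescue/. /backup/

umount /mnt/rescue
```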
My first suggestion for fixing the array would be to run "mdadm -A --scan", which should find and assemble any RAID arrays.
I also see that you're using /dev/sdb1 and /dev/sdf1 - are you sure those are the correct drives? The lettering of block devices in /dev is not guaranteed; in other words, your /dev/sda could be /dev/sdb on the next boot.
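One way to double-check which physical disks map to which letters (the exact by-id names depend on your hardware, and the device name below is just an example):

```shell
# Stable identifiers: these symlinks don't change when sdX letters shuffle.
ls -l /dev/disk/by-id/

# Ask mdadm what it sees on a member partition (example device name):
mdadm --examine /dev/sdb1
```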
*edit: made saving data the first step.
Last edited by tpolich (2011-06-26 05:31:24)
Thanks for the response.
With regards to the partition numbers - those should be correct. I will deal with UUIDs once I get the problem fixed but /dev/sdx format is easier to deal with at this stage.
Recovering data is definitely important - this is my attempt at mounting:
# mount /dev/sdb1 temp/
mount: you must specify the filesystem type
# mount -t ext4 /dev/sdb1 temp/
mount: wrong fs type, bad option, bad superblock on /dev/sdb1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
While dmesg shows:
[24173.932273] EXT4-fs (sdb1): VFS: Can't find ext4 filesystem
[24173.963027] EXT2-fs (sdb1): error: can't find an ext2 filesystem on dev sdb1.
[24180.738741] EXT4-fs (sdb1): VFS: Can't find ext4 filesystem
I figured you couldn't mount a drive from an array without assembling the array first, but I don't know a lot, so that could be wrong.
Running mdadm -A --scan shows the following:
# mdadm -A --scan -v
mdadm: looking for devices for further assembly
mdadm: no recogniseable superblock on /dev/sdb1
mdadm: Cannot assemble mbr metadata on /dev/sdb
mdadm: cannot open device /dev/sdc2: Device or resource busy
mdadm: no recogniseable superblock on /dev/sdc1
mdadm: cannot open device /dev/sdc: Device or resource busy
mdadm: cannot open device /dev/sde1: Device or resource busy
mdadm: cannot open device /dev/sde: Device or resource busy
mdadm: no recogniseable superblock on /dev/sdf1
mdadm: Cannot assemble mbr metadata on /dev/sdf
mdadm: cannot open device /dev/sdd1: Device or resource busy
mdadm: cannot open device /dev/sdd: Device or resource busy
mdadm: cannot open device /dev/sda6: Device or resource busy
mdadm: cannot open device /dev/sda5: Device or resource busy
mdadm: Cannot assemble mbr metadata on /dev/sda4
mdadm: cannot open device /dev/sda3: Device or resource busy
mdadm: cannot open device /dev/sda2: Device or resource busy
mdadm: cannot open device /dev/sda1: Device or resource busy
mdadm: cannot open device /dev/sda: Device or resource busy
mdadm: cannot open device /dev/sdg1: Device or resource busy
mdadm: cannot open device /dev/sdg: Device or resource busy
mdadm: No arrays found in config file or automatically
Critical lines:
mdadm: no recogniseable superblock on /dev/sdb1
mdadm: no recogniseable superblock on /dev/sdf1
I get a similar message when I do mdadm -A /dev/md0 /dev/sdb1 /dev/sdf1
FYI some more useful information - output of fdisk -l:
# fdisk -l
Disk /dev/sdg: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xb53f7ed7
Device Boot Start End Blocks Id System
/dev/sdg1 63 1953520064 976760001 7 HPFS/NTFS/exFAT
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sda1 * 63 385559 192748+ 83 Linux
/dev/sda2 385560 195703829 97659135 83 Linux
/dev/sda3 195703830 781642574 292969372+ 83 Linux
/dev/sda4 781642575 910548134 64452780 5 Extended
/dev/sda5 781642638 879301709 48829536 83 Linux
/dev/sda6 879301773 910548134 15623181 83 Linux
Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes
81 heads, 63 sectors/track, 765633 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x89596911
Device Boot Start End Blocks Id System
/dev/sdd1 2048 3907029167 1953513560 83 Linux
Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes
81 heads, 63 sectors/track, 382818 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xf9d33da2
Device Boot Start End Blocks Id System
/dev/sdf1 2048 1953525167 976761560 fd Linux raid autodetect
Disk /dev/sde: 2000.4 GB, 2000398934016 bytes
81 heads, 63 sectors/track, 765633 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xc1fee383
Device Boot Start End Blocks Id System
/dev/sde1 2048 3907029167 1953513560 83 Linux
WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes
256 heads, 63 sectors/track, 181688 cylinders, total 2930277168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdc1 1 4294967295 2147483647+ ee GPT
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
81 heads, 63 sectors/track, 382818 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xb4fb1c53
Device Boot Start End Blocks Id System
/dev/sdb1 2048 1953525167 976761560 fd Linux raid autodetect
Critical lines:
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
81 heads, 63 sectors/track, 382818 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xb4fb1c53
Device Boot Start End Blocks Id System
/dev/sdb1 2048 1953525167 976761560 fd Linux raid autodetect
Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes
81 heads, 63 sectors/track, 382818 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xf9d33da2
Device Boot Start End Blocks Id System
/dev/sdf1 2048 1953525167 976761560 fd Linux raid autodetect
The whole point of RAID level 1 is that you don't lose any data if one disk fails - the members are straight-up mirrors. If you can't mount the drive by itself, it most likely means the superblock is missing not only your RAID information but the filesystem information as well.
I found a pretty nice guide on recovering ext4 superblocks; if you had a working filesystem on the disk, you should be able to use this method to recover your superblock.
http://linuxexpresso.wordpress.com/2010 … in-ubuntu/
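For reference, that method can be rehearsed safely on a scratch image file before touching the real partition. Everything below is a sketch: the image size, the 1 KiB block size, and the backup location 8193 are choices made for the demo - on the real partition, run `mke2fs -n` (dry run, writes nothing) to list where its backup superblocks actually live.

```shell
# Rehearsal on a throwaway image file - NOT a real disk.
img=/tmp/sbdemo.img
dd if=/dev/zero of="$img" bs=1M count=64 status=none

# Make an ext4 filesystem with 1 KiB blocks so the backup superblock
# locations are predictable (8193, 24577, ...). -F: it's a plain file.
mkfs.ext4 -F -q -b 1024 "$img"

# Simulate the damage: zero the primary superblock (byte offset 1024).
dd if=/dev/zero of="$img" bs=1024 seek=1 count=1 conv=notrunc status=none

# A plain e2fsck would now report "Bad magic number in super-block".
# Point it at the first backup instead (-B gives the block size):
e2fsck -y -b 8193 -B 1024 "$img" || true  # exit 1 just means "errors were corrected"
```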
edit*: didn't post link
Last edited by tpolich (2011-06-27 06:30:17)
Ah I see - I assumed the superblock was related to the raid array rather than the filesystem.
I followed the guide but ran into problems when trying to restore the superblock:
# e2fsck -b 32768 /dev/sdb1
e2fsck 1.41.14 (22-Dec-2010)
e2fsck: Bad magic number in super-block while trying to open /dev/sdb1
The superblock could not be read or does not describe a correct ext2
filesystem. If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
I tried all backup blocks but had no luck.
Just to be clear, have you:
1) created your RAID 1 array with data already in place on /dev/sdb1 or /dev/sdf1?
OR
2) created the array, formatted /dev/md0 as ext4, mounted it and copied data onto it?
OR
3) something else I can't think of? A brief description of what you did and in what order would be helpful in this case.
I ask because knowing what caused the breakage often helps in solving it or recovering the data.
Carpe Diem
Option 2. I partitioned both hard drives, created the RAID array on /dev/md0, created the filesystem, mounted the array and copied data onto it. The array was fully built at the time (confirmed by checking /proc/mdstat).
The process was (roughly) similar to this https://wiki.archlinux.org/index.php/Co … em_to_RAID
teepee47 wrote: I followed the guide but ran into problems when trying to restore the superblock … I tried all backup blocks but had no luck.
Without being able to recover a single superblock, I don't know of a way you can recover the data. The superblock contains all the critical information the filesystem needs to function.
I found another interesting guide on recovering the RAID superblock - it works because mdadm is quite intelligent - but I don't think it will help you. If you can't mount a single drive from a RAID 1 array, it means something other than the RAID array is wrong.
Here is the link if you want to give it a try.
Testdisk has a function to locate superblock backups on ext file systems, you may want to give it a try.
On which partition were your data before migrating to RAID ?
Carpe Diem
tpolich wrote: I found another interesting guide on recovering the raid superblock … Here is the link if you want to give it a try.
Interesting - I might try this if I run out of other ideas. Another thing I was considering is making a new filesystem but forcing it to write the superblock only. Hopefully it would be the same as the old superblock and I would be able to access my data. On the plus side, I have two hard drives, so I have two attempts.
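For what it's worth, that "superblock only" idea maps to mke2fs's -S flag (write superblock and group descriptors, nothing else). It is a long shot: it only has a chance if every parameter matches the original mkfs run exactly, and it must be followed by e2fsck. A sketch with a hypothetical device name and block size:

```shell
# DANGER: only attempt this on one mirror, keeping the other untouched
# as a fallback. -b (and any other options) MUST match the original mkfs.
mke2fs -n -b 4096 /dev/sdb1   # dry run first: prints what would be done
mke2fs -S -b 4096 /dev/sdb1   # write superblock + group descriptors only
e2fsck -yf /dev/sdb1          # then let e2fsck pick up the pieces
```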
NSB-fr wrote: Testdisk has a function to locate superblock backups on ext file systems … On which partition were your data before migrating to RAID ?
Yes, I just gave Testdisk a try - I did a deep search on the hard drive and it didn't see any files. I assume that was what I was supposed to do - I didn't see any options about superblocks, etc.
I had the data on a different hard drive, then created the array and copied the data onto the array before removing it from the old hard drive.
teepee47 wrote: I had the data on a different hard drive then created the array and copied the data onto the array before removing it from the old hard drive.
If you didn't write too much data to the old drive, Testdisk and/or Photorec (which is part of the testdisk package) can help you recover the deleted files from it.
Carpe Diem
Ah, of course! I fear I've written too much data to the old drive but it's certainly worth a try.
I just realised that I hadn't posted the results of my recovery attempt here.
I was able to recover all my files using Photorec on one of the drives from the failed RAID array (Testdisk alone was no use). After I had recovered the files, I tried to rebuild the array and then format the filesystem as I had done previously, except I used the -S flag for mkfs to write the superblock only. This did not work at all. I have now rebuilt the array from scratch and copied the recovered files back onto it. For the time being I will keep another level of redundancy (a daily backup script to another hard drive) until I feel confident that I will not have this problem again.
That being said, I still have no idea what caused the initial problem. If anyone has any ideas, please let me know. I will now mark this thread as solved - even though the initial problem was never properly resolved... Thanks to NSB-fr and tpolich for their advice.
Last edited by teepee47 (2011-07-10 08:38:17)