I am trying to recover data from partitions on a RAID 1 array.
However, when I try to mount either of the partitions (/dev/sdc1 or /dev/sdc2), I get the error "mount: unknown filesystem type 'linux_raid_member'".
The mount command I used was (I did the same for sdc2):
mount /dev/sdc1 /mnt/backup
I also ran 'sudo smartctl' and got "SMART Health Status: OK" for both partitions, so I think I can rule out disk health issues.
I tried 'mdadm --assemble --scan' and got:
mdadm: failed to add /dev/sdc1 to /dev/md/1: Invalid argument
mdadm: failed to RUN_ARRAY /dev/md/1: Invalid argument
mdadm: failed to add /dev/sdc2 to /dev/md/2: Invalid argument
mdadm: failed to RUN_ARRAY /dev/md/2: Invalid argument
This is the output of 'sudo fdisk -l /dev/sdc':
Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 0A7D7BB3-7070-2141-83F2-69FDC85C9E69

Device       Start      End  Sectors  Size Type
/dev/sdc1 2000896 11990919 9990024 4.8G Linux RAID
/dev/sdc2 12000002 13998857 1998856 976M Linux RAID
the output of 'sudo mdadm --examine /dev/sdc1':
/dev/sdc1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 3d41c60c:405653fd:39c88afe:9fe03176
Name : LS421DE-EM4F5:1
Creation Time : Wed Oct 31 01:03:49 2007
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 9990144 (4.76 GiB 5.11 GB)
Array Size : 4995008 (4.76 GiB 5.11 GB)
Used Dev Size : 9990016 (4.76 GiB 5.11 GB)
Data Offset : 8192 sectors
Super Offset : 8 sectors
Unused Space : before=8112 sectors, after=18446744073709543432 sectors
State : clean
Device UUID : e2450711:ceb78514:cad2b878:87dabe05
Update Time : Tue Mar 24 18:34:56 2015
Checksum : f1310b3 - correct
Events : 239666
Device Role : Active device 0
Array State : A. ('A' == active, '.' == missing, 'R' == replacing)
and the output of 'sudo mdadm --examine /dev/sdc2':
/dev/sdc2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : b2c77f77:9d9ae7ac:313e5f4a:a378df2a
Name : LS421DE-EM4F5:2
Creation Time : Wed Oct 31 01:03:49 2007
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 1998975 (976.23 MiB 1023.48 MB)
Array Size : 999424 (976.16 MiB 1023.41 MB)
Used Dev Size : 1998848 (976.16 MiB 1023.41 MB)
Data Offset : 1024 sectors
Super Offset : 8 sectors
Unused Space : before=944 sectors, after=18446744073709550600 sectors
State : clean
Device UUID : fd4ca8dc:5994b788:4d6537ff:d4dbe0a9
Update Time : Tue Mar 24 18:10:31 2015
Checksum : 935fba7c - correct
Events : 624
Device Role : Active device 0
Array State : A. ('A' == active, '.' == missing, 'R' == replacing)
Please help me. Anything is appreciated. Thanks in advance.
EDIT: To be clear, there should be more data on the disk than what the partitions shown account for. I've already run 'testdisk' on this disk.
Last edited by mimaste7 (2015-03-29 05:24:56)
However, when I try to mount either of the partitions (/dev/sdc1 or /dev/sdc2), I get the error "mount: unknown filesystem type 'linux_raid_member'".
This message simply means that the device is a member of a RAID. You are supposed to start the RAID and then mount the RAID, not mount the underlying device.
If for some reason the RAID cannot be started (check /proc/mdstat as well; you might have to --stop before you --assemble --scan), you can circumvent this by using a loop mount:
mount -o ro,loop,offset=$((8192*512)) /dev/sdc1 /mnt/sdc1
The offset in question is what's shown in the mdadm --examine output as "Data Offset".
Of course this requires a filesystem to be on the RAID; if it's something else, like LVM or LUKS, you have to go through losetup and then enable the other layers individually.
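A sketch of the offset arithmetic and of the losetup route (device names as quoted earlier in this thread; the actual mounting commands need root and a real device, so they are left commented out):

```shell
# mount's offset= option wants bytes; mdadm's Data Offset is given in
# 512-byte sectors (8192 for sdc1 in the --examine output above).
offset=$((8192 * 512))
echo "$offset"   # byte offset to pass to mount or losetup

# Filesystem directly on the RAID:
#   mount -o ro,loop,offset=$offset /dev/sdc1 /mnt/sdc1
# LVM or LUKS layered on the RAID: attach a read-only loop device first,
# then bring up each layer on top of it:
#   losetup --find --show --read-only --offset "$offset" /dev/sdc1
#   (then e.g. vgchange -ay for LVM, or cryptsetup open for LUKS)
```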
I'm not sure what to do about wrong partition sizes; you might need TestDisk to check if other partitions can be found.
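For what it's worth, the --examine and fdisk output above quantify how short the partition is; a small sketch of the arithmetic, with the numbers copied from those listings:

```shell
# mdadm needs Data Offset + Used Dev Size sectors, but the partition has
# fewer, hence the "Invalid argument" failures. The absurd
# "after=18446744073709543432 sectors" figure is just this negative
# remainder wrapped around to unsigned 64-bit (2^64 - 8184).
data_offset=8192        # sdc1 Data Offset, in sectors
used_dev_size=9990016   # sdc1 Used Dev Size, in sectors
part_sectors=9990024    # sdc1 size from fdisk, in sectors
missing=$(( data_offset + used_dev_size - part_sectors ))
echo "$missing"         # sectors sdc1 is too small by
# The same calculation for sdc2 (1024 + 1998848 - 1998856) gives 1016,
# matching its wrapped "after=18446744073709550600" value.
```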
Last edited by frostschutz (2015-03-29 14:02:43)
Thanks for the reply!!
I have already run testdisk. It attempted to restore the partitions, but I'm not 100% sure how effective it was.
In any case, I tried 'sudo mount -o ro,loop,offset=$((8192*512)) /dev/sdb1 /mnt/backup/' and got the following error:
mount: wrong fs type, bad option, bad superblock on /dev/loop0,
       missing codepage or helper program, or other error

       In some cases useful info is found in syslog - try
       dmesg | tail or so.
I also ran 'sudo mount -o ro,loop,offset=$((1024*512)) /dev/sdb2 /mnt/backup/' and got this error:
mount: unknown filesystem type 'swap'
Which seems odd because I don't remember ever setting up a swap space on this disk.
Here's the output from 'dmesg | tail -25':
[ 4356.952269] scsi host6: usb-storage 7-1:1.0
[ 4357.955511] scsi 6:0:0:0: Direct-Access ST4000DM 000-1F2168 CC52 PQ: 0 ANSI: 5
[ 4357.956939] sd 6:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
[ 4357.957263] sd 6:0:0:0: [sdb] 7814037168 512-byte logical blocks: (4.00 TB/3.63 TiB)
[ 4357.959435] sd 6:0:0:0: [sdb] Write Protect is off
[ 4357.959451] sd 6:0:0:0: [sdb] Mode Sense: 28 00 00 00
[ 4357.962322] sd 6:0:0:0: [sdb] No Caching mode page found
[ 4357.962335] sd 6:0:0:0: [sdb] Assuming drive cache: write through
[ 4357.963301] sd 6:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
[ 4358.000908] sdb: sdb1 sdb2
[ 4358.002117] sd 6:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
[ 4358.003882] sd 6:0:0:0: [sdb] Attached SCSI disk
[ 4358.084885] md: sdb1 does not have a valid v1.2 superblock, not importing!
[ 4358.084904] md: md_import_device returned -22
[ 4358.084975] md: md1 stopped.
[ 4358.108807] md: sdb2 does not have a valid v1.2 superblock, not importing!
[ 4358.108878] md: md_import_device returned -22
[ 4358.108998] md: md2 stopped.
[ 4538.771028] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem
[ 4538.771275] EXT4-fs (loop0): bad geometry: block count 1248752 exceeds size of device (1247729 blocks)
Output from 'cat /proc/mdstat'
Personalities :
unused devices: <none>
To my knowledge, this is not an LVM/LUKS setup. I bought a Buffalo NAS and it used the default RAID configuration; a Buffalo support guy (higher than Level 1 support, for what it's worth) told me the default is RAID 1, which makes sense since that's what 'mdadm --examine' says.
I apologize for my lack of knowledge about all this. I'm fresh meat to NAS/RAID/etc.
[ 4538.771028] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem
[ 4538.771275] EXT4-fs (loop0): bad geometry: block count 1248752 exceeds size of device (1247729 blocks)
Well, that just means the partition is a little too small.
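Roughly how "a little too small" follows from the dmesg numbers (a sketch; assumes 4 KiB ext3 blocks and the partition size quoted in the fdisk output earlier):

```shell
# The ext3 filesystem was sized for the full RAID device, but the loop
# device only covers what is left of the partition after the data offset.
part_sectors=9990024   # sdX1 size from fdisk, in 512-byte sectors
data_offset=8192       # mdadm Data Offset, in sectors
fs_blocks=1248752      # block count recorded in the ext3 superblock
dev_blocks=$(( (part_sectors - data_offset) * 512 / 4096 ))
echo "$dev_blocks"                  # the "size of device" dmesg reports
echo $(( fs_blocks - dev_blocks ))  # 4 KiB blocks the loop device is short
```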
If you add the partition offset as well, you can use /dev/sdx instead of /dev/sdx1; that gives you a loop device limited only by the disk size.
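As a concrete sketch of that, with the partition start sector taken from the fdisk output earlier (run as root, and double-check the device letter, since it changes between posts in this thread):

```shell
# Loop-mount from the whole disk instead of the partition: add the
# partition's start sector to the mdadm data offset, so the loop device
# is bounded by the disk size rather than the too-small partition.
part_start=2000896   # sdc1 Start, from fdisk
data_offset=8192     # Data Offset, from mdadm --examine
byte_offset=$(( (part_start + data_offset) * 512 ))
echo "$byte_offset"  # byte offset into the whole disk
# mount -o ro,loop,offset=$byte_offset /dev/sdc /mnt/backup
```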