My 3ware (now LSI) 9650SE-4LPML RAID controller is broken, probably due to an over-voltage event or something similar. The disks are fully working.
I plugged the two disks into my motherboard's SATA ports and tried to reassemble the array with mdadm using the following steps:
1.) examine the disks
$ sudo blkid
..
/dev/sdd: TYPE="isw_raid_member"
/dev/sde: TYPE="isw_raid_member"
..
$ sudo cat /proc/partitions | grep 'sdd\|sde'
8 48 1953514584 sdd
8 64 1953514584 sde
$ sudo mdadm --examine --verbose --metadata=imsm /dev/sd[d-e]
/dev/sdd:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.0.00
Orig Family : f238be7c
Family : f238be7c
Generation : 00000000
Attributes : All supported
UUID : 10d65f2e:c6be80c8:fc19e38d:bf8bad5e
Checksum : 8af7c963 correct
MPB Sectors : 1
Disks : 2
RAID Devices : 1
Disk01 Serial : WD-WCAVY1518064
State : active
Id : 00050000
Usable Size : 3907022862 (1863.01 GiB 2000.40 GB)
[raidstor]:
UUID : 99f7b7bd:c281070f:eeff6900:d7a0aae2
RAID Level : 0
Members : 2
Slots : [UU]
Failed disk : none
This Slot : 1
Array Size : 7814039228 (3726.02 GiB 4000.79 GB)
Per Dev Size : 3907019784 (1863.01 GiB 2000.39 GB)
Sector Offset : 0
Num Stripes : 15261795
Chunk Size : 128 KiB
Reserved : 0
Migrate State : idle
Map State : normal
Dirty State : clean
Disk00 Serial : WD-WCAVY1522215
State : active
Id : 00040000
Usable Size : 3907022862 (1863.01 GiB 2000.40 GB)
/dev/sde:
Magic : Intel Raid ISM Cfg Sig.
Version : 1.0.00
Orig Family : f238be7c
Family : f238be7c
Generation : 00000000
Attributes : All supported
UUID : 10d65f2e:c6be80c8:fc19e38d:bf8bad5e
Checksum : 8af7c963 correct
MPB Sectors : 1
Disks : 2
RAID Devices : 1
Disk00 Serial : WD-WCAVY1522215
State : active
Id : 00040000
Usable Size : 3907022862 (1863.01 GiB 2000.40 GB)
[raidstor]:
UUID : 99f7b7bd:c281070f:eeff6900:d7a0aae2
RAID Level : 0
Members : 2
Slots : [UU]
Failed disk : none
This Slot : 0
Array Size : 7814039228 (3726.02 GiB 4000.79 GB)
Per Dev Size : 3907019784 (1863.01 GiB 2000.39 GB)
Sector Offset : 0
Num Stripes : 15261795
Chunk Size : 128 KiB
Reserved : 0
Migrate State : idle
Map State : normal
Dirty State : clean
Disk01 Serial : WD-WCAVY1518064
State : active
Id : 00050000
Usable Size : 3907022862 (1863.01 GiB 2000.40 GB)
2.) assemble sdd and sde (doesn't work!)
$ export IMSM_NO_PLATFORM=1 && sudo mdadm --assemble /dev/md/raidstor --force --verbose --readonly --name=raidstor --uuid 10d65f2e:c6be80c8:fc19e38d:bf8bad5e /dev/sd[d-e]
mdadm: looking for devices for /dev/md/raidstor
mdadm: No OROM/EFI properties for /dev/sdd
mdadm: no RAID superblock on /dev/sdd
mdadm: No OROM/EFI properties for /dev/sde
mdadm: no RAID superblock on /dev/sde
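One thing worth double-checking in the failed attempt above: sudo resets the environment by default, so an IMSM_NO_PLATFORM exported in the calling shell may never reach mdadm at all. A small root-free sketch of the effect, using env -i to mimic sudo's clean environment (the mdadm line at the end is an untested suggestion for this setup, not a verified fix):

```shell
export IMSM_NO_PLATFORM=1
# env -i starts with an empty environment, much like sudo's default env_reset:
env -i sh -c 'echo "${IMSM_NO_PLATFORM:-unset}"'                     # prints: unset
# ...unless the variable is passed explicitly on the command line:
env -i IMSM_NO_PLATFORM=1 sh -c 'echo "${IMSM_NO_PLATFORM:-unset}"'  # prints: 1
# so the assemble attempt would become something like:
#   sudo env IMSM_NO_PLATFORM=1 mdadm --assemble /dev/md/raidstor --force \
#       --verbose --readonly /dev/sd[d-e]
```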
3.) optional: add md_mod to the initramfs (assembly doesn't work either)
$ sudo vim /etc/mkinitcpio.conf
MODULES="md_mod"
$ sudo /usr/bin/mkinitcpio -p linux
$ sudo reboot
$ lsmod | grep md_mod
md_mod 122880 0
Is there any way to assemble them? The two disks have the same UUID, which seems curious, but maybe that's just how this controller works.
I only want to copy the data from these disks to another one. How can I proceed? Any ideas?
Thanks, Tobias
Last edited by archtobi (2017-03-22 21:01:31)
Hardware RAID and software RAID (mdadm) usually aren't compatible with each other; you'll probably need to get a replacement hardware RAID controller of the exact same make/model to reassemble your array.
Although it will be quicker/easier/cheaper to just restore the data from your last backup.
Last edited by Slithery (2017-03-22 21:09:54)
Does -e imsm help? It looks like mdadm -A for some reason doesn't recognize the superblock even though -E does.
$ sudo vim /etc/mkinitcpio.conf
MODULES="md_mod"
$ sudo /usr/bin/mkinitcpio -p linux
$ sudo reboot
Oh no, that's a complete waste of time. modprobe md_mod would achieve the same thing without rebooting.
Does -e imsm help?
No, the same result.
...you'll probably need to get a replacement hardware RAID controller of the exact same make/model to reassemble your array.
Although it will be quicker/easier/cheaper to just restore the data from your last backup.
Yes, I think so!
Thanks for your reply.
archtobi
Or use the wonders of Google:
https://www.spinics.net/lists/raid/msg39943.html
http://serverfault.com/questions/226053 … inux-mdadm
It seems Linux supports this stuff and people got it to work, though it's a bit more complicated than native Linux RAID.
Last edited by mich41 (2017-03-24 20:52:14)
Which RAID level? Your output says RAID 0, chunk size 128 KiB, at offset 0. Is that correct? Or are we just looking at the wrong data? A real HW RAID controller wouldn't use the Intel IMSM format, would it?
If you want to play with mdadm, use overlays. https://raid.wiki.kernel.org/index.php/ … erlay_file With overlays you can experiment w/o actually writing to the disks.
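For reference, overlay creation along the lines of that wiki page looks roughly like this (requires root; /dev/sdd and the 2 TB size are taken from the output earlier in the thread, so treat this as an unverified sketch and adapt it to your devices):

```shell
# All writes during experiments land in the sparse file; /dev/sdd itself is
# only ever read. Tear down with dmsetup remove + losetup -d when done.
sectors=$(blockdev --getsz /dev/sdd)     # 3907029168 sectors for this 2 TB disk
truncate -s $((sectors * 512)) /tmp/overlay-sdd.img   # sparse copy-on-write file
loop=$(losetup -f --show /tmp/overlay-sdd.img)
# device-mapper snapshot target: origin=/dev/sdd, COW device=$loop,
# N = non-persistent, 8 = chunk size in 512-byte sectors
echo "0 $sectors snapshot /dev/sdd $loop N 8" | dmsetup create overlay_sdd
# repeat for sde, then experiment on /dev/mapper/overlay_sd[de]
```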
A simple RAID 0 without any offset involved should be able to --build (legacy array, no metadata):
# mdadm --build /dev/md42 --level=0 --chunk=128 --raid-devices=2 /dev/mapper/overlay_sde /dev/mapper/overlay_sdf
mdadm: array /dev/md42 built and started.
(disk order may be the other way around)
you'll probably need to get a replacement hardware RAID controller of the exact same make/model to reassemble your array
Most HW RAID can be made to work with mdadm - barring some obscure modes, the Linux RAID layer supports a lot - but this is best investigated while the RAID controller is still working and not when shit already hit the fan.
You need to know the exact RAID layout / settings / offsets / disk orders.
If the data is not encrypted, this can also be deduced (with effort) from the raw data, or simply through experimentation (if it mounts and a large file turns out to be intact, chances are you got it right).
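The disk-order experiment can be illustrated without touching any real device. A toy model of RAID 0 striping (4-byte chunks standing in for the 128 KiB ones, plain files standing in for disks) shows that only the right member order reproduces the original data:

```shell
printf 'AAAABBBBCCCCDDDD' > orig.bin
rm -f disk0.bin disk1.bin
# stripe chunk i onto disk (i mod 2), as a two-member RAID 0 does
for i in 0 1 2 3; do
  dd if=orig.bin bs=4 skip=$i count=1 status=none >> "disk$((i % 2)).bin"
done
# rebuild by interleaving disk0 then disk1 (the correct order)
for i in 0 1; do
  dd if=disk0.bin bs=4 skip=$i count=1 status=none
  dd if=disk1.bin bs=4 skip=$i count=1 status=none
done > rebuilt.bin
cmp -s orig.bin rebuilt.bin && echo "order correct"   # prints: order correct
```

With the real disks under overlays the same idea applies: --build with one order, try a read-only mount, and if the filesystem looks like garbage, swap the members and try again.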
Last edited by frostschutz (2017-03-24 21:01:14)