Hi guys
I have a setup of 1x SSD with Arch (LUKS) and 4x 1 TB HDDs configured as RAID 10 via mdadm as a datastore.
This setup worked flawlessly until I disconnected all HDDs (because I had to test something with another HDD).
So I disconnected all HDDs (including the 4x 1 TB RAID drives), booted, and tested some things with the other HDD.
Then I shut down again, reconnected all the RAID HDDs, and booted back into Arch.
From this point on, the RAID seems to be lost:
mdadm --assemble --scan
mdadm: No arrays found in config file or automatically
The 4 HDDs (previously configured as RAID 10) are fine and recognized, but no longer as RAID members.
Unfortunately I have no clue how exactly I configured the RAID 10 (which HDD was in which position); this was years ago, and I did not document it.
Does somebody know a way to recover the exact configuration, so that I could recreate the array?
Maybe the RAID information is saved somewhere in the system? (/etc/mdadm.conf contains only the defaults.)
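Two places that might still hold traces (a hedged sketch; it assumes persistent journald logs and the default Arch initramfs path):
# kernel logs from an old boot may show the assembly, e.g. "md/raid10:md127: active with 4 out of 4 devices"
sudo journalctl --list-boots
sudo journalctl -k -b -5 | grep -i 'md/raid10'
# an old/fallback initramfs may embed a generated mdadm.conf with the array UUID
lsinitcpio /boot/initramfs-linux-fallback.img | grep -i mdadm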
All four HDDs produce the following output via `mdadm --examine`:
sudo mdadm --examine /dev/sdX
/dev/sdX:
MBR Magic : aa55
Partition[0] : 1953525167 sectors at 1 (type ee)
Kind regards
Do you remember if you used RAID 1+0 or md RAID10?
See https://wiki.archlinux.org/title/RAID#N … AID_levels
The output of fdisk -l for the 4 disks (run as root) may be helpful.
Do you remember if you used RAID 1+0 or md RAID10?
I definitely created one RAID 10 array with four devices.
The output of fdisk -l for the 4 disks (run as root) may be helpful.
Disk /dev/sdd: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: TOSHIBA MQ01ABD1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: {uuid}

Disk /dev/sde: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WD10JPVX-22J
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: {uuid}

Disk /dev/sdf: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: WDC WD10JPVX-22J
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: {uuid}

Disk /dev/sdg: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: TOSHIBA MQ01ABD1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: {uuid}
Kind regards
All four HDDs produce the following output via `mdadm --examine`:
sudo mdadm --examine /dev/sdX
/dev/sdX:
MBR Magic : aa55
Partition[0] : 1953525167 sectors at 1 (type ee)
That's a partition table... does examine yield anything for the partitions?
Did you use RAID on partitions or on the full disk?
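If there are partitions on the members, examining them directly might still turn something up; a quick loop, with device names assumed from the fdisk output above:
# examines the disks and any partitions on them
for d in /dev/sd[defg]*; do sudo mdadm --examine "$d"; done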
----
If you used full-disk RAID and the mdadm headers have now been wiped by a GPT partition table, you're yet another victim of "something created a partition table"...
Never use the full disk for anything; always use a partition table.
If you don't have old --examine output to refer to, you're pretty much left with trial and error: you'll have to guess the RAID settings (or derive them from the raw data). Use overlays for your experiments, as sketched below.
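A minimal overlay sketch along the lines of the kernel wiki guide linked below; /dev/sdd and the 4 GiB scratch size are assumptions, and the same steps repeat for each member disk:
truncate -s 4G /tmp/overlay-sdd                  # sparse file that absorbs all writes
loop=$(sudo losetup -f --show /tmp/overlay-sdd)
size=$(sudo blockdev --getsz /dev/sdd)           # disk size in 512-byte sectors
# dm snapshot: reads come from /dev/sdd, writes land only in the overlay
sudo dmsetup create overlay-sdd --table "0 $size snapshot /dev/sdd $loop P 8"
# experiment on /dev/mapper/overlay-sdd; /dev/sdd itself stays untouched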
I wrote an overview of mdadm --create on Unix Stack Exchange here: https://unix.stackexchange.com/a/131927/30851
It also links to the kernel raid wiki guide: https://raid.wiki.kernel.org/index.php/ … erlay_file
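To make the trial-and-error step concrete, here is a hedged sketch of a single guess, run against the overlay devices only; the chunk size, layout, metadata version, and device order below are assumptions that will likely take several iterations to get right:
sudo mdadm --create /dev/md0 --level=10 --layout=n2 --chunk=512 --metadata=1.2 \
    --raid-devices=4 --assume-clean \
    /dev/mapper/overlay-sdd /dev/mapper/overlay-sde \
    /dev/mapper/overlay-sdf /dev/mapper/overlay-sdg
# the array held a LUKS volume, so a recognizable LUKS header at the start
# of the assembled device means the guess is likely correct
sudo cryptsetup isLuks /dev/md0 && sudo cryptsetup luksDump /dev/md0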
Good luck
does examine yield anything for the partitions?
Did you use RAID on partitions or on the full disk?
I used the full disks, no partitions.
After I created the array, I encrypted the whole RAID volume with LUKS.
I had not heard before about putting partitions in between; it seems I'll have to read more about it. Thanks for your suggestion!
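For future reference, a minimal sketch of the partitioned layout suggested above (hypothetical device names; this builds a new array, it does not recover the old one):
# one full-size "Linux RAID" (type fd00) partition per disk
for d in /dev/sd{d,e,f,g}; do sudo sgdisk -n 1:0:0 -t 1:fd00 "$d"; done
# then build the array on the partitions, not on the raw disks
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[defg]1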