I created a RAID10 array with LUKS encryption, but I recently noticed that one of the disks has fallen out of the array.
$ cat /proc/mdstat
Personalities : [raid10]
md0 : active raid10 sdd1[2] sdc1[1] sdb1[0]
7813608448 blocks super 1.2 512K chunks 2 near-copies [4/3] [UUU_]
bitmap: 10/59 pages [40KB], 65536KB chunk
unused devices: <none>
The drive passed a smartctl long test without errors, so I need to add it back and rebuild the array. I am confused by this output, though:
$ lsblk -f
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS
sdb
└─sdb1 linux_raid_member 1.2 storage:0 04ea48bf-546e-464b-8802-8668f9d4c8cb
  └─md0 crypto_LUKS 1 15aa1748-c14b-464d-ac1b-f83c9eaf9328
sdc
└─sdc1 linux_raid_member 1.2 storage:0 04ea48bf-546e-464b-8802-8668f9d4c8cb
  └─md0 crypto_LUKS 1 15aa1748-c14b-464d-ac1b-f83c9eaf9328
sdd
└─sdd1 linux_raid_member 1.2 storage:0 04ea48bf-546e-464b-8802-8668f9d4c8cb
  └─md0 crypto_LUKS 1 15aa1748-c14b-464d-ac1b-f83c9eaf9328
sde
└─sde1 linux_raid_member 1.2 storage:0 04ea48bf-546e-464b-8802-8668f9d4c8cb
Why is sde1 shown as a RAID member but without the crypto_LUKS association? Is that because it fell out of the array? If so, is the right way to add it back to the array simply to run:
# mdadm --add /dev/md0 /dev/sde1
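Before running anything, I assume a read-only check of the array's own view cannot hurt (if I understand the tools right):
# mdadm --detail /dev/md0    # shows the overall state and which member slot is missing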
Last edited by MS1 (2025-02-05 01:17:53)
Offline
Why is sde1 shown as a RAID member but without the crypto_LUKS association?
lsblk does not show the crypto_LUKS entry for it because sde1 is currently not taking an active part in your RAID. The linux_raid_member type, on the other hand, just reflects the metadata that is still on the disk.
With any luck, mdadm --re-add or --add will work, followed by a resync (progress is shown in /proc/mdstat).
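In concrete terms, that would be something like this (just a sketch, using the device names from your output):
mdadm /dev/md0 --re-add /dev/sde1    # quick catch-up resync if the write-intent bitmap still covers the gap
mdadm /dev/md0 --add /dev/sde1       # fallback if --re-add refuses; triggers a full rebuild of that member
cat /proc/mdstat                     # recovery progress and estimated finish time show up here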
It would be interesting to know why the drive was kicked out in the first place. Have you checked your logs?
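For example (assuming the kernel log is kept in the systemd journal):
journalctl -k | grep -iE 'sde|md0'    # kernel messages mentioning the disk or the array
smartctl -a /dev/sde                  # error counters and self-test log, in case it was the drive after all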
Last edited by frostschutz (2025-02-04 15:02:56)
Offline
https://wiki.archlinux.org/title/RAID#A … _the_array
have you tried
sudo mdadm --assemble --scan
?
also: https://wiki.archlinux.org/title/RAID#R … n_the_raid
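A harmless way to see what mdadm would find on disk before assembling anything:
mdadm --examine --scan    # ARRAY lines built from the superblocks found on the devices
cat /etc/mdadm.conf       # compare against the recorded configuration, if you keep one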
Last edited by cryptearth (2025-02-04 16:14:03)
Offline
I am backing up critical data all day. I was thinking of trying this once it is finished:
mdadm --re-add /dev/md0 /dev/sde1
I think the wiki command below is for adding a new disk, not one that already has metadata on it, but I am no expert:
mdadm --manage --add /dev/md0 /dev/sde1
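If I understand the man page correctly, whether --re-add can work depends on the old superblock still matching the array, so a read-only comparison like this (my own guess at a sanity check) should tell me more:
# mdadm --examine /dev/sde1 | grep -iE 'array uuid|events|device role'
# mdadm --examine /dev/sdb1 | grep -iE 'array uuid|events|device role'
If the Array UUID matches and the events gap is small enough for the write-intent bitmap to cover, --re-add should only need a quick catch-up; otherwise it will refuse and a full rebuild via --add is needed.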
Offline
Using the --re-add command was the right move. The array is rebuilding now. As to why it got knocked out, I think it is because I unplugged one of the drives while doing some cable maintenance and forgot to plug it back in. Then I mounted the array and it ran on 3 of 4 disks for a while without me realizing it.
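In case it helps anyone later, I am keeping an eye on the rebuild with nothing fancier than:
$ watch cat /proc/mdstat     # recovery percentage and estimated finish time
# mdadm --detail /dev/md0    # should report a clean state with 4 active devices once it finishes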
Offline
Usually I would recommend ZFS, but due to some weird issues ZFS seems to have trouble re-adding a drive that was a member previously but got kicked or lost, at least without first wiping the metadata with something like wipefs.
Which, to be fair, makes sense: a drive that gets kicked or lost usually gets replaced anyway.
It seems md handles this kind of user error more gracefully.
Anyway, I still recommend ZFS and RAID6/RAIDZ2 over RAID10/multiple mirrors, for one simple reason: a stripe of mirrors only survives a second failure if it hits a different mirror, whereas with dual parity it does not matter which two drives fail.
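For reference, the manual route on the ZFS side would be roughly this (hypothetical pool name 'tank' and device names, double-check the device node before wiping anything):
wipefs -a /dev/sde1      # clear the stale signatures so ZFS will accept the disk again
zpool replace tank sde1  # resilver onto the same disk in place
zpool status tank        # watch the resilver progress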
Offline