
#1 2024-10-20 11:11:02

ilblasco
Member
Registered: 2021-02-22
Posts: 7

LVM raid5 inactive - cannot recover

Hello,

After a power loss, I cannot see one of my LVM volumes anymore.
I have 4 HDDs, which I have configured as LVM raid1 (2 partitions) + raid5 (4 partitions).
They are attached via USB to my Raspberry Pi, which I use as a server.
They are mounted at boot and served on my home network with Samba.

Now only the raid1 volume (vg1) is recognized at boot and set as "active". The other one (vg5) is "inactive" and I'm not able to bring it back up.

What I tried:

[root@mblasco-laptop mblasco]# lvscan
  ACTIVE            '/dev/vg1/lvm_raid1' [465,99 GiB] inherit
  inactive          '/dev/vg5/lvm_raid5' [1,36 TiB] inherit
[root@mblasco-laptop mblasco]# lvchange -ay /dev/vg5/lvm_raid5
  device-mapper: reload ioctl on  (254:15) failed: Input/output error
[root@mblasco-laptop mblasco]# pvs
  PV         VG  Fmt  Attr PSize    PFree
  /dev/sdc1  vg5 lvm2 a--  <465,00g 36,00m
  /dev/sdd1  vg5 lvm2 a--   464,96g     0 
  /dev/sde1  vg5 lvm2 a--  <465,00g 36,00m
  /dev/sde2  vg1 lvm2 a--  <466,00g     0 
  /dev/sdh1  vg5 lvm2 a--  <465,00g 36,00m
  /dev/sdh2  vg1 lvm2 a--  <466,00g     0 
[root@mblasco-laptop mblasco]# lvchange -ay --activation-mode degraded /dev/vg5/lvm_raid5
  device-mapper: reload ioctl on  (254:15) failed: Input/output error
[root@mblasco-laptop mblasco]# lvchange -ay --force /dev/vg5/lvm_raid5
  device-mapper: reload ioctl on  (254:15) failed: Input/output error

As I've never performed "maintenance" operations on my LVM volumes before (no errors in 4-5 years of use), I would like to ask whether anyone has faced the same problem.
Are there any workarounds (removing/adding devices, clean-up procedures, etc.) I can try?
I hope I don't have to wipe my drives...
Please let me know if I have to provide additional information. Thank you in advance!
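If it helps, here is a sketch of the extra checks I was thinking of running next (standard commands only - no output included, since I haven't captured it yet):

dmesg | grep -iE 'sd[cdeh]|i/o error'        # kernel log: USB resets / read errors on the PVs
lvs -a -o +devices,lv_health_status vg5      # layout, backing devices and health of the raid5 LV
lvchange -ay -vvv /dev/vg5/lvm_raid5         # verbose activation, to see which sub-LV the reload fails on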


#2 2024-10-20 17:58:00

cryptearth
Member
Registered: 2024-02-03
Posts: 962

Re: LVM raid5 inactive - cannot recover

Well - for any kind of "raid" (in the broader sense of "multi-drive array") you don't want to use partitions (or files) but only entire drives - so setting up two arrays on the same physical disks was the first mistake, and for that reason alone you should wipe and restore from backup anyway.
Next: don't use single parity (raid5) for anything other than temporary, to-be-lost data - single parity carries the risk of not surviving a double failure, i.e. a second drive failing while the array is already rebuilding - always use either mirror+stripe (raid10) or dual parity (raid6), depending on your workload and use case. (I personally prefer dual parity over mirror+stripe: raid10 only ever gives you at most 50% of usable space and can fail completely if both drives of the same mirror die at the same time.)
Although both MD and LVM have come a long way, today there are better options like BtrFS (for raid10) or ZFS (for dual parity). Sticking with MD and LVM will bite you (as it already has), because for some reason they're not really fault tolerant: LVM reports an I/O error on one of the drives - single parity is exactly what should protect against that, so the array should still come up, yet it fails completely - with BtrFS or ZFS you could at least import the array degraded.

TLDR: wipe the drives, set up ONE ZFS raidz2 pool and restore from backup - if you want a simple mirror, use additional drives - don't use partitions!
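Roughly something like this, just as a sketch (the by-id names are placeholders for your actual drives, and of course this wipes them):

zpool create -o ashift=12 tank raidz2 \
    /dev/disk/by-id/usb-DRIVE1 /dev/disk/by-id/usb-DRIVE2 \
    /dev/disk/by-id/usb-DRIVE3 /dev/disk/by-id/usb-DRIVE4   # one pool, whole drives, dual parity (ashift=12 assumes 4K sectors)
zfs create -o compression=lz4 tank/share                    # dataset to export over samba afterwards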

