
#1 2023-05-06 02:17:50

ZSmith
Member
Registered: 2017-02-25
Posts: 16

LVM RAID 5 Array Fails When One Drive Removed

My root filesystem was previously stored on an LVM LV composed of a single PV with an SSD cache. I recently attempted to convert this LV to a RAID5 array with 3 disks by removing the cache and then using the command:

lvconvert --verbose --type raid5 --stripes 2 /dev/VG/rootVol

I had to issue this command twice, because LVM wanted to convert to RAID1 first before converting to RAID5.
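
For anyone following along, the full sequence was roughly the following (a sketch rather than my exact shell history; in particular the cache-detach step may have been --uncache rather than --splitcache):

# detach the SSD cache from the LV
lvconvert --splitcache /dev/VG/rootVol
# first pass: LVM converts the linear LV to a 2-way RAID1
lvconvert --verbose --type raid5 --stripes 2 /dev/VG/rootVol
# second pass: the RAID1 is taken over to RAID5 with 2 data stripes plus parity
lvconvert --verbose --type raid5 --stripes 2 /dev/VG/rootVol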

After waiting for the disks to sync ("lvs -a -o name,sync_percent" reports 100%), I attempted to test the data redundancy by powering off the system, disconnecting one drive, and restarting. With only a single drive removed, the LV cannot be brought online and the system will not boot. It is also not possible to mount the LV when booted from a recovery drive. Thankfully, the system does recover when the missing drive is reconnected.
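
For reference, which PVs each RAID sub-LV ended up on can be checked with something like this (assuming the VG is simply named VG):

lvs -a -o lv_name,segtype,sync_percent,devices VG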

Not sure what I did wrong. Is there some gotcha step I missed in the process of converting to RAID5 that would cause the array not to be created properly?

lvs -a
  LV                   VG Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  [lvol0_pmspare]      VG ewi------- 100.00m                                                    
  rootVol              VG rwi-aor---  <1.76t                                    100.00          
  rootVolCache         VG Cwi---C--- 100.00g                                                    
  [rootVolCache_cdata] VG Cwi------- 100.00g                                                    
  [rootVolCache_cmeta] VG ewi------- 100.00m                                                    
  [rootVol_rimage_0]   VG iwi-aor--- 900.00g                                                    
  [rootVol_rimage_1]   VG iwi-aor--- 900.00g                                                    
  [rootVol_rimage_2]   VG iwi-aor--- 900.00g                                                    
  [rootVol_rmeta_0]    VG ewi-aor---   4.00m                                                    
  [rootVol_rmeta_1]    VG ewi-aor---   4.00m                                                    
  [rootVol_rmeta_2]    VG ewi-aor---   4.00m 

Last edited by ZSmith (2023-05-06 03:42:53)


#2 2023-05-06 06:43:34

frostschutz
Member
Registered: 2013-11-15
Posts: 1,417

Re: LVM RAID 5 Array Fails When One Drive Removed

The lvm vgchange manpage has this option:

--activationmode partial|degraded|complete

degraded allows RAID LVs with missing PVs to be activated.

Maybe it would help?
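
For example, something like this from a rescue shell (untested on my end, and assuming your VG is literally called VG):

vgchange -ay --activationmode degraded VG

I think there is also an activation/activation_mode setting in lvm.conf if you want it as the default.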

Not familiar with LVM RAID otherwise. I do all my RAID with mdadm directly, with LVM on top...


#3 2023-05-06 13:57:03

ZSmith
Member
Registered: 2017-02-25
Posts: 16

Re: LVM RAID 5 Array Fails When One Drive Removed

I double-checked that the activation mode was set correctly, but it didn't make the volume mountable.


#4 2023-05-07 07:29:24

-thc
Member
Registered: 2017-03-15
Posts: 496

Re: LVM RAID 5 Array Fails When One Drive Removed

What is the output of

lvs -a -o name,size,segtype,datastripes,stripesize,reshapelenle,devices

?


#5 2023-05-13 14:29:17

ZSmith
Member
Registered: 2017-02-25
Posts: 16

Re: LVM RAID 5 Array Fails When One Drive Removed

The past week has been very strange.

Before I had a chance to execute @-thc's command, I decided to convert the RAID5 array back to a RAID1 array. This operation appeared to complete successfully, but when I rebooted, the array was listed as "partial", even though all the drives were present and operating.

I then gave up trying to convert my existing root filesystem and attempted to create a RAID5 array from scratch. This failed in exactly the same way as before: the array appeared normal when all three drives were present, but dropped to "partial" when a single drive was removed.
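
The from-scratch attempt was essentially this (a sketch; the LV name and size here are placeholders, the real LV was larger):

lvcreate --type raid5 --stripes 2 -L 100G -n testRaid5 VG

followed by the same test as before: let it sync, power off, disconnect one drive, and try to activate with --activationmode degraded.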

I then gave up on RAID5 altogether, added a 4TB drive to the system, and attempted to create a 2.5TB RAID1 array. LVM did this by placing one data copy on the 4TB drive and striping the other data copy across the three 1TB drives. This appears to work normally until I remove a drive, at which point the array is reported as "partial" even though one complete copy of the data is still accessible. However, when I force LVM to activate the partial volume and mount it, all of the data appears intact, even in the extreme cases of the 4TB drive being missing or all three 1TB drives being missing.
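
For reference, "forcing" it here means activating in partial mode and then mounting read-only, roughly like this (a sketch; raid1Vol is a placeholder for the new LV's name):

vgchange -ay --activationmode partial VG
# mount read-only while a copy is missing, just to inspect the data
mount -o ro /dev/VG/raid1Vol /mnt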

This has led me to conclude that there is some error in the LVM logic that prevents it from distinguishing between the "degraded" and "partial" states of a logical volume, and that the actual underlying data storage is working as intended.
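
For anyone who wants to see what LVM itself reports, the status in question shows up with something like this (sketch; VG name assumed):

lvs -a -o lv_name,lv_health_status,lv_attr,devices VG

The lv_health_status field is where LVM distinguishes "partial" from other problem states such as "refresh needed".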

I'm still trying to decide what to do about this, but wanted to record my experience in case anyone else runs into a similar issue.

