This is my system when it boots properly. I use LVM on LUKS with the sd-encrypt and lvm2 hooks (initramfs setup sketched after the output below), since I have one VG spanning two encrypted PVs:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 1,8T 0 disk
└─hddRoot 254:0 0 1,8T 0 crypt
├─Root-home 254:4 0 1,3T 0 lvm /home
└─Root-docker 254:5 0 128G 0 lvm /var/lib/docker
nvme0n1 259:0 0 238,5G 0 disk
├─nvme0n1p1 259:1 0 256M 0 part /boot
└─nvme0n1p2 259:2 0 238,2G 0 part
└─cryptlvm 254:1 0 238,2G 0 crypt
├─Root-root 254:2 0 64G 0 lvm /
└─Root-swap 254:3 0 20G 0 lvm [SWAP]
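For reference, the initramfs side of this is set up roughly as follows (a sketch; the crypttab.initramfs UUIDs are placeholders, not my real ones):
/etc/mkinitcpio.conf (relevant line):
HOOKS=(base systemd autodetect modconf block keyboard sd-vconsole sd-encrypt lvm2 filesystems fsck)
/etc/crypttab.initramfs (one entry per LUKS container):
cryptlvm  UUID=<uuid-of-nvme0n1p2>  none  luks
hddRoot   UUID=<uuid-of-sda>        none  luks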
I can cache any of the LVs in hddRoot and it works until reboot:
# lvcreate --type cache --cachemode writeback -L 128G -n cacheHome Root/home /dev/mapper/cryptlvm
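Right after creating it, the cache looks fine; a quick way to check is just lvs with a few extra fields:
# lvs -a -o name,vg_name,segtype,pool_lv,devices Root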
However, when I reboot, the services for at least one of the LVs in hddRoot time out, and the lvm-activate-Root service fails with "Manual repair required" on the cache pool:
× lvm-activate-Root.service - /usr/bin/lvm vgchange -aay --nohints Root
Loaded: loaded (/run/systemd/transient/lvm-activate-Root.service; transient)
Transient: yes
Active: failed (Result: exit-code) since Tue 2021-11-16 08:31:17 -03; 2min 32s ago
Process: 391 ExecStart=/usr/bin/lvm vgchange -aay --nohints Root (code=exited, status=5)
Main PID: 391 (code=exited, status=5)
CPU: 254ms
nov 16 08:31:16 archlinux systemd[1]: Started /usr/bin/lvm vgchange -aay --nohints Root.
nov 16 08:31:16 archlinux lvm[391]: wait4 child process 429 failed: Interrupted system call
nov 16 08:31:16 archlinux systemd[1]: Stopping /usr/bin/lvm vgchange -aay --nohints Root...
nov 16 08:31:16 archlinux lvm[391]: Check of pool Root/cacheHome_cpool failed (status:-1). Manual repair required!
nov 16 08:31:16 archlinux lvm[391]: device-mapper: remove ioctl on (254:5) failed: Device or resource busy
nov 16 08:31:16 archlinux lvm[391]: Interrupted...
nov 16 08:31:16 archlinux lvm[391]: 2 logical volume(s) in volume group "Root" now active
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 1,8T 0 disk
└─hddRoot 254:0 0 1,8T 0 crypt
└─Root-home_corig 254:6 0 1,3T 0 lvm
nvme0n1 259:0 0 238,5G 0 disk
├─nvme0n1p1 259:1 0 256M 0 part /boot
└─nvme0n1p2 259:2 0 238,2G 0 part
└─cryptlvm 254:1 0 238,2G 0 crypt
├─Root-root 254:2 0 64G 0 lvm /
├─Root-swap 254:3 0 20G 0 lvm [SWAP]
└─Root-cacheHome_cpool_cmeta 254:5 0 28M 0 lvm
If I uncache Root/home and restart lvm-activate-Root service, I can boot the system.
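Concretely, the recovery is just:
# lvconvert --uncache Root/home
# systemctl restart lvm-activate-Root.service
After that the remaining LVs activate and the boot finishes.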
I just discovered that this does not happen when creating snapshots, but it does happen even when the cache is on the same PV as the _corig LV (just for the sake of testing).
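By snapshots I mean plain thick snapshots, something like this (name and size are arbitrary):
# lvcreate -s -L 16G -n homeSnap Root/home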
However, the same behaviour shows up when I try to set up a thin pool: it works until reboot, but it cannot be activated by the autogenerated lvm-activate-Root.service at boot time. Whatever it is, it's related to pools, but I'm having a hard time debugging it.
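The thin pool test was along these lines (names and sizes are just examples):
# lvcreate --type thin-pool -L 64G -n thinpool Root /dev/mapper/hddRoot
# lvcreate -V 32G -n thintest --thinpool Root/thinpool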
I tried some variations of this solution, but since lvm2-lvmetad.service isn't generated (even with global/use_lvmetad=1 in /etc/lvm/lvm.conf), I wasn't exactly successful.
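For completeness, this is the fragment I added to /etc/lvm/lvm.conf, which seems to be ignored now that lvmetad has been dropped from the lvm2 2.03 series:
global {
    use_lvmetad = 1
}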
I just migrated from GRUB to systemd-boot. I'm not sure what I expected from that, since the problem seems to happen after the kernel has booted. Could it be an lvm2 bug, or some kind of misconfiguration?
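In case it matters, the loader entry is roughly this (trimmed, microcode initrd omitted):
/boot/loader/entries/arch.conf:
title   Arch Linux
linux   /vmlinuz-linux
initrd  /initramfs-linux.img
options root=/dev/Root/root rw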
If no one has replied since your last post you can simply edit your post to append in new information rather than serial post like that.
Random shot in the dark -- have you merged all your pacnew files? I know there have been a lot of changes to lvm.conf.
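A quick way to check is pacdiff from pacman-contrib (-o just lists the pending files), or a plain find:
# pacdiff -o
# find /etc -name '*.pacnew'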
"the wind-blown way, wanna win? don't play"
If no one has replied since your last post you can simply edit your post to append in new information rather than serial post like that.
That's good advice
Random shot in the dark -- have you merged all your pacnew files? I know there have been a lot of changes to lvm.conf.
This is a clean install from less than two weeks ago, so the only version of lvm2 that has ever been installed is the most recent one. With that in mind, I downgraded lvm2 (using the Arch Linux Archive) and found that everything works as expected with lvm2 2.03.11-5 or earlier.
It is definitely just a workaround, and it is looking more and more like an lvm2 bug, but it doesn't really solve the issue.
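For the record, the downgrade itself was something like this (exact file names on the Arch Linux Archive may differ, and device-mapper has to be downgraded to the matching version):
# pacman -U \
    https://archive.archlinux.org/packages/d/device-mapper/device-mapper-2.03.11-5-x86_64.pkg.tar.zst \
    https://archive.archlinux.org/packages/l/lvm2/lvm2-2.03.11-5-x86_64.pkg.tar.zst
Adding lvm2 and device-mapper to IgnorePkg in /etc/pacman.conf keeps them from being upgraded again until this is sorted out.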
Last edited by tapajos (2021-11-19 23:16:18)