I tried disabling lvmetad "the nice way" in /etc/lvm/lvm.conf by changing the following line from 1 to 0:
use_lvmetad = 0
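For context, a minimal sketch of where that setting lives in /etc/lvm/lvm.conf (surrounding options elided; this assumes the stock file layout):

```
global {
    # ...
    # Disable the lvmetad metadata caching daemon;
    # LVM commands then scan devices directly instead.
    use_lvmetad = 0
    # ...
}
```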
On boot there is no red systemd message signalling a failure to connect to lvmetad (such messages can appear if you mask the service and the socket). lvmetad.service is inactive by default (I did not mask it):
$ sudo systemctl status lvm2-lvmetad.service
● lvm2-lvmetad.service - LVM2 metadata daemon
     Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.service; static; vendor preset: disabled)
     Active: inactive (dead)
       Docs: man:lvmetad(8)
lvmetad.socket is still active (I did not mask it either):
$ sudo systemctl status lvm2-lvmetad.socket
● lvm2-lvmetad.socket - LVM2 metadata daemon socket
     Loaded: loaded (/usr/lib/systemd/system/lvm2-lvmetad.socket; static; vendor preset: disabled)
     Active: active (listening) since Fri 2019-03-22 07:58:35 CET; 7min ago
       Docs: man:lvmetad(8)
     Listen: /run/lvm/lvmetad.socket (Stream)
     CGroup: /system.slice/lvm2-lvmetad.socket
I rebooted a couple of times and did not observe any issues (kernel 5.0.3-arch1-1-ARCH). LVM volumes also seem to work normally, as does automounting them.
Same for me, until the initramfs is rebuilt. Could you please run "mkinitcpio -p linux" and reboot?
I've done that and it booted normally on restart. My root partition is a plain partition, not managed by LVM, though.
Disabling lvmetad in your config, however, might cause the system to fail to boot if your root is on an LVM volume (note: it works until you update your kernel or rebuild your initramfs - no idea why).
Because the next time you run mkinitcpio, it bakes use_lvmetad = 0 into the initramfs too, and the 'lvm2' initcpio hook is built exclusively around lvmetad. It has no fallback for use_lvmetad = 0 in lvm.conf, so it stops working altogether.
If you get dropped into the initramfs shell, you have to activate and mount the LVM yourself with these commands:
lvm vgchange -a y
mount /dev/mapper/yourvg-rootlv /new_root
exit
Then it will continue booting normally.
So I changed my /etc/initcpio/hooks/lvm2 to this:
#!/usr/bin/ash
run_hook() {
    lvm vgchange -a y
}
After re-running mkinitcpio, this works for me. (Note: if you also use mdadm or encrypt, the lvm2 hook now has to come after them in your mkinitcpio.conf; previously, the position of the lvm2 hook did not matter.)
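To illustrate the ordering point, a mkinitcpio.conf HOOKS line might look like this (an assumed example layout, not copied from any particular system; adjust to your own setup):

```
# /etc/mkinitcpio.conf -- with the hook above, lvm2 must come
# after mdadm_udev and encrypt, since it now activates VGs itself
HOOKS=(base udev autodetect modconf block mdadm_udev encrypt lvm2 filesystems keyboard fsck)
```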
Note that this may be glossing over some of the finer details of LVM, which I'm not using.
edit: lvm2-2.03 also needs "add_runscript" added to the install/lvm2 hook, see https://bbs.archlinux.org/viewtopic.php … 4#p1959024
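A sketch of what that change could look like in the install file (illustrative fragment only, not the full hook; the real build() contains additional binary and udev-rule steps that are omitted here):

```
# /etc/initcpio/install/lvm2 (fragment, illustrative)
build() {
    # ... existing add_binary / add_udev_rule calls ...
    add_runscript   # with lvm2-2.03, needed so the run_hook() script is shipped in the image
}
```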
Last edited by frostschutz (2021-02-27 18:34:48)
You, my friend, are my hero! It is now obvious what was going wrong. Thanks a lot!
I slightly changed the script by adding the --sysinit parameter:
#!/bin/sh
run_hook() {
    lvm vgchange --sysinit -a y
}
Otherwise, dmeventd errors out on boot.
Edit, additional note: I am using LVM's RAID implementation (mostly level 5), thin provisioning, and normal volumes, and except for /boot everything is on an LV. So this successfully covers all of these use cases without any problems, in case anyone reads this later. :-)
Last edited by Swiggles (2019-03-22 15:44:16)
Glad to see the issue is resolved for everyone. I am marking the thread as solved.
With 2.02.184-4, disabling lvmetad via "use_lvmetad = 0" is no longer necessary.
The service unit for lvmetad now contains "Before=shutdown.target", which fixes the shutdown delays.
See also:
https://github.com/lvmteam/lvm2/issues/17
https://sourceware.org/git/?p=lvm2.git; … bda5d4d6f9
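The fix amounts to an ordering directive in the unit file; based on the change linked above, the [Unit] section now contains something along these lines (fragment, not the complete unit):

```
[Unit]
Description=LVM2 metadata daemon
# Order the daemon before shutdown.target so it is stopped
# cleanly at shutdown, avoiding the delays
Before=shutdown.target
```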