I have a notebook with a 29.8G SSD. I created a 10G cachepool for the root partition (100G ext4) and another cachepool in the remaining space for home (675G ext4).
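For reference, cache pools like these are created along the following lines (vg00, /dev/sdb and the LV names are placeholders, not necessarily my exact layout):
# 10G cache pool on the SSD, attached to the existing root LV
lvcreate --type cache-pool -L 10G -n root_cache vg00 /dev/sdb
lvconvert --type cache --cachepool vg00/root_cache vg00/root
# remaining SSD space as a cache pool for home
lvcreate --type cache-pool -l 100%FREE -n home_cache vg00 /dev/sdb
lvconvert --type cache --cachepool vg00/home_cache vg00/home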
Occasionally I started to see the messages "A start job is running for dev-disk-by..." and "A start job is running for LVM2 PV scan on device 8:{17,5}", but after the update to systemd 230-5 the problem started to occur with high frequency. I had to reboot the machine many times (about 20x) to get the system past this point, mount home, and load the rest of the system. Normally I only suspend the system, but whenever I do need to reboot, the annoyance comes back and the reboot marathon starts again...
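(For reference, the 8:17 and 8:5 in those messages are block-device major:minor numbers; something like
lsblk -o NAME,MAJ:MIN
maps them back to partitions, typically sdb1 and sda5 respectively.)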
I tried all the different configurations I found in Google searches like this one: https://www.google.com.br/search?q=lvm+ … ob+running
None of them worked, so I had the idea of disabling the caches... reboot and voilà! Could be a lucky boot, so let's try again... 5 reboots and 5 successes, wow, great!
So I decided to give the cache another try, destroying and recreating the cache pools. Fu**, boot loop hell again... disable the cache only for home and voilà again!
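Disabling the cache just means detaching the cache pool again, roughly (vg00/home is a placeholder name):
lvconvert --splitcache vg00/home   # detach but keep the cache pool for later
lvconvert --uncache vg00/home      # or: detach and delete the cache pool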
Now I'm using the entire SSD to cache my root partition, but I would also like to cache the home partition, so here I am, because I can't find any trace of why the problem occurs in the first place, but tinkerers always have ideas :)
Thanks!
Last edited by cerdiogenes (2016-07-30 21:05:42)
Offline
Seems like it may be related to https://bugs.archlinux.org/task/49530#comment148896
PV scan and lvmetad fail (in my case) only if I use lvm cache.
Offline
lvm cache has not worked properly for some time now. Get 150-1 from here; it's the last version that works.
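Downgrading is the usual pacman -U dance, something like the following (package filenames are illustrative, use whatever files the link gives you), plus adding lvm2 and device-mapper to IgnorePkg in /etc/pacman.conf so they don't get upgraded again:
pacman -U lvm2-2.02.150-1-x86_64.pkg.tar.xz device-mapper-2.02.150-1-x86_64.pkg.tar.xz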
Offline
@Rethill is it still the case that no one is in contact with upstream about this issue?
Offline
@loqs looks like it. I don't have enough knowledge of and insight into lvm mechanics to be the one, sadly.
Offline
I have the same problem, but only with the linux-lts kernel; 4.6.4-1-ARCH works fine.
journal from failed boot:
...
Jul 23 00:12:42 extreme9 lvmetad[466]: Cannot lock lockfile [/run/lvmetad.pid], error was [Resource temporarily unavailable]
Jul 23 00:12:42 extreme9 lvmetad[466]: Failed to acquire lock on /run/lvmetad.pid. Already running?
...
Jul 23 00:14:11 extreme9 systemd[1]: dev-mapper-vg00\x2dhome.device: Job dev-mapper-vg00\x2dhome.device/start timed out.
Jul 23 00:14:11 extreme9 systemd[1]: Timed out waiting for device dev-mapper-vg00\x2dhome.device.
Jul 23 00:14:11 extreme9 systemd[1]: Dependency failed for File System Check on /dev/mapper/vg00-home.
Jul 23 00:14:11 extreme9 systemd[1]: Dependency failed for /home.
The issue still persists after the update to lvm2 2.02.161-1 today.
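As a stop-gap it's possible to raise the default 90 s device timeout per mount point in /etc/fstab, e.g. (value and names are just examples):
/dev/mapper/vg00-home  /home  ext4  defaults,x-systemd.device-timeout=5min  0  2
That doesn't fix the PV scan itself, it only gives the device more time to appear before systemd gives up.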
Offline
With the last kernel update my machine didn't boot because it was unable to mount my EFI partition, saying that vfat was an unknown filesystem. In the process I discovered that I was mounting the EFI partition at /boot/efi, while the initramfs image updated by mkinitcpio was being written to /boot/initramfs-linux-lts.img, but GRUB was loading it from /boot/efi/. There is a page in the wiki describing various alternatives, and the simplest one is to mount the EFI partition at /boot: https://wiki.archlinux.org/index.php/EF … up_EFISTUB
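The change amounts to roughly this in /etc/fstab (the UUID is a placeholder for the ESP's actual one):
UUID=XXXX-XXXX  /boot  vfat  defaults  0  2
followed by reinstalling the kernel package so the images land on the ESP and regenerating the GRUB config with grub-mkconfig -o /boot/grub/grub.cfg.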
After doing that, I recreated my caches and rebooted 5 times, and the system didn't hang on any of them. Yup!
Edit: Still working with linux 4.4.16-1-lts and systemd 231.
Last edited by cerdiogenes (2016-07-31 14:12:43)
Offline