Hi All,
Hopefully somebody can shed some light on this for me. Recently I set up a bridge network on my machine, as I want to run some VMs on my physical network. After doing this I happened to run
systemctl
and noticed some failed services. This is the output from
systemctl | grep failed
● lvm2-pvscan@8:32.service loaded failed failed LVM2 PV scan on device 8:32
● lvm2-pvscan@8:48.service loaded failed failed LVM2 PV scan on device 8:48
● lvm2-pvscan@8:80.service loaded failed failed LVM2 PV scan on device 8:80
And here are the details of one of those failed services, from
systemctl status lvm2-pvscan@8:32.service
● lvm2-pvscan@8:32.service - LVM2 PV scan on device 8:32
Loaded: loaded (/usr/lib/systemd/system/lvm2-pvscan@.service; static; vendor preset: disabled)
Active: failed (Result: exit-code) since Sat 2016-05-28 11:53:09 BST; 3h 21min ago
Docs: man:pvscan(8)
Process: 453 ExecStart=/usr/bin/lvm pvscan --cache --activate ay %i (code=exited, status=5)
Main PID: 453 (code=exited, status=5)
May 28 11:53:09 IXTREME systemd[1]: Starting LVM2 PV scan on device 8:32...
May 28 11:53:09 IXTREME lvm[453]: Concurrent lvmetad updates failed.
May 28 11:53:09 IXTREME lvm[453]: Failed to update cache.
May 28 11:53:09 IXTREME systemd[1]: lvm2-pvscan@8:32.service: Main process exited, code=exited, status=5/NOTINSTALLED
May 28 11:53:09 IXTREME systemd[1]: Failed to start LVM2 PV scan on device 8:32.
May 28 11:53:09 IXTREME systemd[1]: lvm2-pvscan@8:32.service: Unit entered failed state.
May 28 11:53:09 IXTREME systemd[1]: lvm2-pvscan@8:32.service: Failed with result 'exit-code'.
Obviously I use LVM on my drives. I have an SSD for the OS with a logical volume just for the root fs. This is so I can easily snapshot it before doing anything crazy (have been known to do crazy things!). I also have two HDDs added to a volume group for data, with logical volume lvdata. I bind mount my Home folders (Pictures, Movies, Videos, Documents) from lvdata.
Everything seems to be working as normal - I can access all my data. What do these service failures mean?
Any help would be much appreciated.
Last edited by jjb2016 (2016-06-01 10:19:34)
Offline
The error message is pretty self-explanatory...
Not a Sysadmin issue, moving to NC.
Offline
I fail to see how it's self-explanatory.
Concurrent lvmetad updates failed. ??
Failed to update cache. ??
Failed to start LVM2 PV scan on device 8:32. ??
What is device 8:32? I've just re-installed Arch on my main drive and I'm still getting these service failures on the first login.
Could my logical volumes be corrupt?
Offline
It is saying that device 8:32 isn't installed or doesn't exist.
Print out your volumes (lvdisplay) and I'll warrant there is no 8:32
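For anyone wondering how these unit names map to disks: the N:M in lvm2-pvscan@N:M.service is the kernel's major:minor block device number, not an LVM identifier. Major 8 is the sd block driver, and each whole disk reserves 16 minors, so 8:32 is typically /dev/sdc. On a live system you can check with `ls -l /sys/dev/block/8:32` or `lsblk`. As a rough sketch of the arithmetic (the helper name is made up and it only handles major 8):

```shell
# majmin_to_sd: map a MAJ:MIN pair to the conventional /dev/sdX node.
# Only covers major 8 (the sd driver); each sd disk reserves 16 minors,
# so minor 32 is the third whole disk. Illustrative helper, not a real tool.
majmin_to_sd() {
  maj=${1%%:*}
  min=${1##*:}
  [ "$maj" = 8 ] || { echo "only major 8 handled" >&2; return 1; }
  idx=$((min / 16))                                   # disk index: 0 -> a, 1 -> b, ...
  letter=$(printf "\\$(printf '%03o' $((97 + idx)))") # index to letter via ASCII
  printf '/dev/sd%s\n' "$letter"
}

majmin_to_sd 8:32   # -> /dev/sdc
majmin_to_sd 8:48   # -> /dev/sdd
majmin_to_sd 8:80   # -> /dev/sdf
```

So the three failed units in the first post correspond to the whole-disk nodes sdc, sdd and sdf (assuming the usual sd naming).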
Offline
This is not a newbie issue, it is an issue with the update to lvm2-2.02.153.
I had the same issue today on a machine which has been running for more than a year without problems over several reboots.
The machine has two LVs for its data partitions.
Today, after installing the updates, on reboot, I got this message for the second LV:
[FAILED] Failed to start LVM2 PV scan on device 254:65.
The start job for the device failed after a 01:30 timeout and the emergency shell showed up.
There, I could manually do
vgscan
vgchange -ay
and device 254:65 showed up and was mounted automatically.
After downgrading the lvm2 package to the previous version 2.02.149, the system boots as before, without any problems.
Offline
This is not a newbie issue, it is an issue with the update to lvm2-2.02.153.
I had the same issue today on a machine which has been running for more than a year without problems over several reboots.
The machine has two LVs for its data partitions. Today, after installing the updates, on reboot, I got this message for the second LV:
[FAILED] Failed to start LVM2 PV scan on device 254:65. The start job for the device failed after a 01:30 timeout and the emergency shell showed up.
There, I could manually do
vgscan
vgchange -ay
and device 254:65 showed up and was mounted automatically. After downgrading the lvm2 package to the previous version 2.02.149, the system boots as before, without any problems.
Thanks for the confirmation that I'm not a complete dimwit! Although having said that, the following question is a noob corner question: what pacman command do I use to downgrade a package? How do you specify the exact version you want to install?
I'm sure a fix for this will come out soon enough, but until then I'd like to downgrade if that fixes the issue. Sadly I've completely wiped my drives to reinstall from scratch (good practice I suppose) but accidentally did not back up all the configuration files I needed to reconstruct ... (I was running DHCPD and BIND with dynamic updates .... ugh ...)
Offline
This is not a newbie issue, it is an issue with the update to lvm2-2.02.153.
No issues for me on two different machines running this version of lvm2.
Offline
Since upgrading from lvm2-2.02.150 to lvm2-2.02.153 I have the same issue on my workstation with two volume groups on one physical volume each. In contrast I have no problems on my laptop with only one physical volume and volume group.
When booting the workstation it hangs just like mrxx described. On each reboot "lvm2-pvscan" will always fail for one or the other (but never both, it seems). When logging into the emergency shell, I can start the failed service with
systemctl start lvm2-pvscan@WHATEVER.service
and resume booting with
systemctl default
After that everything seems to work just fine until the next reboot.
On a hunch I added "lvm2" to the hooks in "/etc/mkinitcpio.conf" on my workstation. As the root partition on my workstation is not on LVM, this was not necessary before. But on my laptop I use LVM for everything except "/boot", so this seemed to be the obvious difference. With the new initramfs my workstation is able to boot to desktop on its own. Unfortunately it does not seem to really solve the problem, as the service still fails. Systemd just does not get stuck waiting for the volume group to appear, as it was already activated by the initramfs.
As the "lvm2" hook seems to be at least a workaround, I checked the boot log on my laptop again, but there really were no failed services. Maybe this issue only affects systems with multiple PVs/VGs?
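For anyone wanting to try the same workaround, the change Adaephon describes amounts to adding the lvm2 hook before filesystems in /etc/mkinitcpio.conf and rebuilding the initramfs. The exact hook list varies per system, so treat this as an illustration only:

```shell
# /etc/mkinitcpio.conf -- example HOOKS line with lvm2 inserted before
# filesystems so the initramfs can activate volume groups itself.
# Your existing hook list will differ; only the lvm2 addition matters here.
HOOKS="base udev autodetect modconf block lvm2 filesystems keyboard fsck"

# Rebuild the initramfs afterwards (the preset name depends on your kernel
# package; "linux" is the default on a stock Arch install):
mkinitcpio -p linux
```

Note that, as Adaephon says, this only hides the boot hang: the lvm2-pvscan@ units still fail, the initramfs just activates the VGs before systemd waits on them.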
Offline
No issues for me on two different machines running this version of lvm2.
Do you have only one PV on these machines? Adaephon mentions no problems on a machine with a single PV/VG, too.
In contrast, we all see the problem on systems with multiple VGs on different PVs. One is always pvscanned successfully, but pvscanning the others subsequently fails.
I can confirm that the problem is solved by downgrading to version 2.02.150.
what pacman command do I use to downgrade a package?
Following the Arch Wiki: Downgrading packages
pacman -U /var/cache/pacman/pkg/lvm2-2.02.150-1-x86_64.pkg.tar.xz
If you have wiped your cache or setup the machine from scratch, you can either copy this package from another machine's pacman cache (if one is available) or download it from the Arch Linux Archive:
wget https://archive.archlinux.org/repos/2016/05/26/core/os/x86_64/lvm2-2.02.150-1-x86_64.pkg.tar.xz
pacman -U lvm2-2.02.150-1-x86_64.pkg.tar.xz
After downgrading, add 'lvm2' to the IgnorePkg section of /etc/pacman.conf to prevent pacman from upgrading this package again, and keep it there until the problem is resolved:
IgnorePkg = lvm2
Offline
jasonwryan wrote: No issues for me on two different machines running this version of lvm2.
Do you have only one PV on these machines? Adaephon mentions no problems on a machine with a single PV/VG, too.
In contrast, we all see the problem on systems with multiple VGs on different PVs. One is always pvscanned successfully, but pvscanning the others subsequently fails.
Both have single PVs with one or more VGs and multiple LVs.
Offline
Underlying upstream issue: https://bugzilla.redhat.com/show_bug.cgi?id=1334063. Corresponding systemd issue https://github.com/systemd/systemd/issues/3353 where it was agreed that systemd isn't the culprit but should nevertheless get adjusted to mitigate problems like this one of LVM.
I agree with mrxx that this is certainly not a newbie issue.
Offline
I'm running a similar lvm setup and was also seeing a failing service at boot. Downgrading lvm2 solved the issue for now.
I would also suggest that an update of the lvm package which potentially results in an unbootable system is not a "newbie issue".
Offline
Please stop suggesting whether you think this thread is a Newbie issue. It was moved here because it clearly did not meet the criteria for Sysadmin; read the link in my first post if you are unsure what that means.
Offline
jjb2016, could you please prepend "[SOLVED]" to your first post's subject as we have a working solution for the moment by downgrading the package?
Jason, I agree this is not a 'Sysadmin' topic. Considering the fact that this is a regression caused by an update, maybe it would be a good idea to move this thread to "Pacman & Package Upgrade Issues".
I'm pretty sure more people will be affected by this strange bug when they are updating their systems.
Offline
jjb2016, could you please prepend "[SOLVED]" to your first post's subject as we have a working solution for the moment by downgrading the package?
Is it not already fixed anyway as per 49483?
Offline
Is it not already fixed anyway as per 49483?
Looks promising: "lvm2 2.02.154-3 solved issue."
Alas, this version is still in the testing repo, which does not solve the problem for production machines, unless one wants to install its dependencies from testing, too.
I look forward to testing this release as soon as it hits the core repo.
Offline
Unfortunately lvm2-2.02.154-3 does not appear to solve the problem. It started with lvm2-2.02.153-1 as previous posters have pointed out; downgrading to lvm2-2.02.150-1 immediately solves the problem. I have 12 volume groups, each comprising one PV, and there is a scan failure for every one of them:
UNIT LOAD ACTIVE SUB DESCRIPTION
● lvm2-pvscan@259:11.service loaded failed failed LVM2 PV scan on device 259:11
● lvm2-pvscan@259:14.service loaded failed failed LVM2 PV scan on device 259:14
● lvm2-pvscan@259:17.service loaded failed failed LVM2 PV scan on device 259:17
● lvm2-pvscan@259:2.service loaded failed failed LVM2 PV scan on device 259:2
● lvm2-pvscan@259:20.service loaded failed failed LVM2 PV scan on device 259:20
● lvm2-pvscan@259:5.service loaded failed failed LVM2 PV scan on device 259:5
● lvm2-pvscan@259:8.service loaded failed failed LVM2 PV scan on device 259:8
● lvm2-pvscan@8:12.service loaded failed failed LVM2 PV scan on device 8:12
● lvm2-pvscan@8:15.service loaded failed failed LVM2 PV scan on device 8:15
● lvm2-pvscan@8:3.service loaded failed failed LVM2 PV scan on device 8:3
● lvm2-pvscan@8:6.service loaded failed failed LVM2 PV scan on device 8:6
All of these exist, and as far as I can figure out there is nothing wrong with any of them.
This is the output from one failed scan's status:
May 31 16:55:07 tovilyis.excom.com systemd[1]: Starting LVM2 PV scan on device 8:6...
May 31 16:55:08 tovilyis.excom.com lvm[1013]: Concurrent lvmetad updates failed.
May 31 16:55:08 tovilyis.excom.com lvm[1013]: Failed to update cache.
May 31 16:55:08 tovilyis.excom.com systemd[1]: lvm2-pvscan@8:6.service: Main process exited, code=exited, status=5/NOTINSTALLED
May 31 16:55:08 tovilyis.excom.com systemd[1]: Failed to start LVM2 PV scan on device 8:6.
May 31 16:55:08 tovilyis.excom.com systemd[1]: lvm2-pvscan@8:6.service: Unit entered failed state.
May 31 16:55:08 tovilyis.excom.com systemd[1]: lvm2-pvscan@8:6.service: Failed with result 'exit-code'.
Again I want to stress that there is nothing wrong with any of those; the problem disappears with a downgrade. While this may be a temporary fix, it is still an annoyance to have to skip lvm2 for each and every system upgrade.
Offline
Unfortunately lvm2-2.02.154-3 does not appear to solve the problem. It started with lvm2-2.02.153-1 as previous posters have pointed out; downgrading to lvm2-2.02.150-1 immediately solves the problem. I have 12 volume groups each comprising one pv, and there is a scan failure for every one of them:
I would recommend requesting a reopening of 49483 if you think it is that issue or file a new issue on the bugtracker against either lvm2 or systemd depending on where you think the issue lies so the relevant maintainers are made aware an issue still exists for you.
Offline
I would guess the problem to be with lvm2, as the recent updates relate to this component, while systemd has been stable at 229-3 since February 18. It's possible of course that lvm2 > 150 has an issue with systemd-229-3.
To which section of the forum do you think this issue belongs? I know people can get sensitive when posts are improperly categorised...
Offline
polaris6262 wrote: Unfortunately lvm2-2.02.154-3 does not appear to solve the problem. It started with lvm2-2.02.153-1 as previous posters have pointed out; downgrading to lvm2-2.02.150-1 immediately solves the problem. I have 12 volume groups each comprising one pv, and there is a scan failure for every one of them:
I would recommend requesting a reopening of 49483 if you think it is that issue or file a new issue on the bugtracker against either lvm2 or systemd depending on where you think the issue lies so the relevant maintainers are made aware an issue still exists for you.
To which section of the forum do you think this issue belongs? I know people can get sensitive when posts are improperly categorised...
As I indicated I would use the bugtracker rather than the forums for this to ensure it is seen by the package maintainer.
Offline
Done. It's Bug Report 49530 "lvm2-pvscan errors during system boot".
Offline
Marked as solved as we have a workaround and a bug reported.
Thanks.
Last edited by jjb2016 (2016-06-01 10:20:20)
Offline
@polaris6262 thank you for submitting the bug report.
@jjb2016 and mrxx did lvm2-2.02.154-3 fix the issue for you?
Offline
With 154-3, the system did not stop at boot anymore asking for maintenance.
All volumes were mounted correctly, even though the PV scans still displayed [Failed] messages upon booting.
Error and status messages were similar to those listed by polaris6262, the most significant being
Concurrent lvmetad updates failed.
After turning off lvmetad, the boot-time PV scans report success.
That fixed it for me.
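In case anyone else wants to try mrxx's lvmetad workaround, one way to turn it off on a setup of this era is sketched below. The lvm.conf option and the unit names are standard for lvm2 with lvmetad support, but double-check them on your system before applying:

```shell
# Sketch: disable lvmetad so pvscan does not go through the metadata daemon.
# 1. In /etc/lvm/lvm.conf, in the "global" section, set:
#      use_lvmetad = 0
# 2. Stop the daemon and its socket so it is not socket-activated again:
systemctl disable --now lvm2-lvmetad.socket lvm2-lvmetad.service
# 3. If your initramfs includes the lvm2 hook, rebuild it so the embedded
#    configuration matches (preset name depends on your kernel package):
mkinitcpio -p linux
```

Without lvmetad, LVM falls back to scanning devices directly, which sidesteps the "Concurrent lvmetad updates failed" errors at the cost of the metadata cache.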
Offline
Please vote for this bug: https://bugs.archlinux.org/task/49530
Offline