
#1 2012-09-05 00:20:53

Aethyr
Member
Registered: 2012-05-15
Posts: 2

Broken LVM setup due to recent systemd upgrade

So yeah, a couple of days ago I let the system upgrade and rebooted. My setup is basically two identical HDDs in a RAID 0 array with LVM on top. After the upgrade, boot hangs somewhere during the initramfs stage and tells me it cannot find my volume groups. Any clues as to what I could do to fix it?

In case a fresh install is needed (which would annoy me quite a bit) - any clues as to why the god-damned install media refuses to boot from USB?


#2 2012-09-06 10:07:36

anon1054572
Member
Registered: 2012-04-14
Posts: 17

Re: Broken LVM setup due to recent systemd upgrade

Did you also upgrade to kernel 3.5.3 or any 3.5.x during that update? I had the same situation (switch to systemd + kernel update breaking LVM) and I solved it by downgrading to linux 3.4.x. Systemd was not at fault in my case.

You can fix your boot by executing

lvm vgchange --sysinit -a y

manually in the console you get dropped into. That should be enough; in my case it wasn't, and the system still failed to boot correctly after getting part of the way through.
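
Roughly, what you'd type in that console is something like this (a sketch only, since the manual route didn't save my boot; whether a plain exit lets the boot carry on depends on your initramfs):

lvm vgscan                     # look for volume groups now that the devices are up
lvm vgchange --sysinit -a y    # activate the logical volumes
exit                           # with luck the interrupted boot continues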
Anyway, it seems to be an issue with 3.5.3; I didn't look deeper into it and will be sticking with 3.4.8 for some time.
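
If you want to do the same, the downgrade itself is just a pacman -U of the old package from your cache (the file name below is only an example, check what is actually in /var/cache/pacman/pkg):

pacman -U /var/cache/pacman/pkg/linux-3.4.8-1-x86_64.pkg.tar.xz    # example file name, use your cached package

Adding linux to IgnorePkg in /etc/pacman.conf keeps the next -Syu from pulling 3.5.x straight back in.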

Last edited by anon1054572 (2012-09-06 10:36:17)


#3 2012-09-06 18:25:13

Aethyr
Member
Registered: 2012-05-15
Posts: 2

Re: Broken LVM setup due to recent systemd upgrade

It seems to have worked at first glance. Thanks for the help! I was starting to despair that I might have to switch to Ubuntu until Arch gets systemd all sorted out :P

Now on to fixing an xorg problem on my second Arch box. It seems the breakages never end :P

Edit: Is there a way to make this happen automatically at boot time? I'm sure there must be one...

Last edited by Aethyr (2012-09-06 22:16:31)


#4 2012-09-07 09:19:36

anon1054572
Member
Registered: 2012-04-14
Posts: 17

Re: Broken LVM setup due to recent systemd upgrade

I haven't tested it myself, but people claim that adding lvmwait to your bootloader command line and changing the lvm2 hook (/usr/lib/initcpio/hooks/lvm2) to the following fixes the problem:

#!/usr/bin/ash

run_hook() {
    local pvdev

    modprobe -q dm-mod >/dev/null 2>&1

    # If the lvmwait= parameter has been specified on the command line
    # wait for the device(s) before trying to activate the volume group(s)
    for pvdev in ${lvmwait//,/ }; do
        poll_device ${pvdev} ${rootdelay}
    done

    msg "Activating logical volumes..."
    lvm pvscan
    [ -d /etc/lvm ] && lvm vgscan

    if [ -n "$quiet" ]; then
      lvm vgchange --sysinit -a y >/dev/null
    else
      lvm vgchange --sysinit -a y
    fi
}

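If you edit the hook, keep in mind that the change only ends up in the boot image once the initramfs is regenerated:

mkinitcpio -p linux    # rebuild the initramfs so the modified lvm2 hook is actually used

Otherwise the old hook keeps being used at boot.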


I found the fix in this thread: https://bbs.archlinux.org/viewtopic.php?id=145714

It's not clear where the bug comes from. It's probably upstream, but only Arch users seem to have run into it, so it was presumably made apparent by the compile options the Arch maintainers use.

Personally I prefer a downgrade to such a workaround. I'm tired of new kernels breaking my system; I had major issues after the upgrades to 3.3 and 3.4, and now this.

Last edited by anon1054572 (2012-09-07 09:22:19)


#5 2012-09-07 11:11:15

Ashren
Member
From: Denmark
Registered: 2007-06-13
Posts: 1,229

Re: Broken LVM setup due to recent systemd upgrade

See:  https://bugs.archlinux.org/task/30966

and

https://mailman.archlinux.org/pipermail … 29123.html

Solution:

Append

lvmwait=/dev/sdXY

to your kernel command line, where sdXY is the device your root LV resides on.
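
For example, on a GRUB legacy setup the kernel line in /boot/grub/menu.lst ends up looking roughly like this (all device names are placeholders for a typical LVM layout, adjust to your own; on a RAID setup like the OP's the PV will be an md device rather than sdXY):

title  Arch Linux
root   (hd0,0)
kernel /vmlinuz-linux root=/dev/mapper/vg0-root lvmwait=/dev/sda2 ro
initrd /initramfs-linux.img

With any other bootloader, append the same lvmwait=... parameter to its kernel/append line.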

