#1 2019-06-23 11:36:45

C-Ren
Member
Registered: 2019-03-21
Posts: 19

[SOLVED] Root volume on RAID 1 / LVM not found during boot

I'm installing Arch Linux using LVM. About my third or fourth time installing Arch, but my first time trying out LVM. Here's my disk layout:

I have two physical storage media, /dev/sda (an SSD, capacity about 100GB) and /dev/sdb (an HDD, capacity about 1TB).

Using LVM, I created a volume group volumegroup0, which contains the following (a rough sketch of the commands involved follows the list):

* a system volume mirrored across the SSD and the HDD using raid1, which takes up most of the SSD's capacity (volumegroup0/systemvolume_raid1), with ext4 filesystem (which contains a swapfile of 16GB)
* a boot volume on the SSD about 500MB (volumegroup0/boot), with btrfs filesystem
* a home volume on the HDD, which has a 10GB cachepool on the SSD for faster access (volumegroup0/home), with btrfs filesystem
* a var volume on the HDD (volumegroup0/var), which has a 2GB cachepool on the SSD, like the home volume, with ext4 filesystem
* a snapshot volume for making backups of the systemvolume, called volumegroup0/systemvolume_snapshot.
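
Roughly, that layout corresponds to commands like these (a sketch, not my exact shell history -- the partition names, sizes and cache-pool details here are assumptions):

# physical volumes and the volume group (partition names assumed)
pvcreate /dev/sda1 /dev/sdb1
vgcreate volumegroup0 /dev/sda1 /dev/sdb1

# root volume mirrored across both disks with LVM raid1
lvcreate --type raid1 -m 1 -L 80G -n systemvolume_raid1 volumegroup0 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/volumegroup0/systemvolume_raid1

# boot volume on the SSD
lvcreate -L 500M -n boot volumegroup0 /dev/sda1
mkfs.btrfs /dev/volumegroup0/boot

# home volume on the HDD with a 10GB cache pool on the SSD
lvcreate -L 800G -n home volumegroup0 /dev/sdb1
lvcreate --type cache-pool -L 10G -n home_cache volumegroup0 /dev/sda1
lvconvert --type cache --cachepool volumegroup0/home_cache volumegroup0/home
mkfs.btrfs /dev/volumegroup0/home

# var is set up the same way, with a 2GB cache pool and ext4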

Now, the system could boot last night, and I thought that I had successfully completed installation. But on booting this morning, the system fails to mount the root volume, and drops me into an emergency shell.

With the default "quiet" kernel parameter removed (I'm using GRUB), this is what the screen looks like when the boot fails:

:: running early hook [udev]
Starting version 242.29-2-arch
:: running early hook [lvm2]
:: running hook [udev]
:: Triggering uevents...

This stays on the screen for what feels like 30 seconds or so before the following appears:

Waiting 10 seconds for device /dev/mapper/volumegroup0-systemvolume_raid1 ...

Then, after 10 seconds:

ERROR: device '/dev/mapper/volumegroup0-systemvolume_raid1' not found. Skipping fsck.
:: mounting '/dev/mapper/volumegroup0-systemvolume_raid1' on real root
mount: /new_root: no filesystem type specified.
You are now being dropped into an emergency shell.
sh: can't access tty; job control turned off.
[rootfs ]#
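
I assume the boot could also be continued by hand from that shell (the lvm2 hook puts the lvm binary in the initramfs), with something like:

lvm vgchange -ay volumegroup0
mount /dev/mapper/volumegroup0-systemvolume_raid1 /new_root
exit

(I haven't actually tried that from the initramfs shell, though.)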

While looking for a solution to the issue by booting from an archiso live USB, I've noticed something odd. Directly after booting, if I

ls /dev/mapper

or

ls /dev/volumegroup0

, the output contains every logical volume that I would expect -- except for systemvolume_raid1. Even volumegroup0-systemvolume_raid1_rimage_0 and _rimage_1 and volumegroup0-systemvolume_raid1_rmeta_0 and _rmeta_1 are present in /dev/mapper, but the root volume proper is absent. But if I run

vgscan

(with or without --mknodes) and then list either of those directories again, systemvolume_raid1 is present and correct as if nothing were wrong. And I can mount it without any problems -- which I have done, in order to try regenerating the initramfs after checking that the necessary modules and hooks were present in mkinitcpio.conf (they were), and to regenerate /etc/fstab (using genfstab -U) without the swapfile, since regenerating /etc/fstab to include a line for automatically activating swap was the last thing I did before the problem appeared.
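
For reference, that recovery routine from the live USB looks roughly like this (the mount points and the HOOKS line are illustrative, not copied verbatim from my system):

vgscan --mknodes
vgchange -ay volumegroup0
mount /dev/volumegroup0/systemvolume_raid1 /mnt
mount /dev/volumegroup0/boot /mnt/boot
genfstab -U /mnt > /mnt/etc/fstab     # regenerated without the swapfile line
arch-chroot /mnt mkinitcpio -P        # HOOKS in /etc/mkinitcpio.conf has lvm2 before filesystems,
                                      # e.g. HOOKS=(base udev autodetect modconf block lvm2 filesystems keyboard fsck)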

Here's the contents of /etc/fstab, with the UUIDs omitted for brevity:

#/dev/mapper/volumegroup0-systemvolume_raid1
UUID=...    /    ext4    rw,relatime    0    1

#/dev/mapper/volumegroup0-var
UUID=...    /var    ext4    rw,relatime,stripe=16    0    2

#/dev/mapper/volumegroup0-home
UUID=...    /home    btrfs    rw,relatime,ssd,space_cache,subvolid=5,subvol=/    0    0

#/dev/mapper/volumegroup0-boot
UUID=...    /boot    btrfs    rw,relatime,ssd,space_cache,subvolid=5,subvol=/    0    0
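
(The swap line I removed was just the usual swapfile entry; I no longer have the exact line, but from memory it was something like:)

#/swapfile
/swapfile    none    swap    defaults    0    0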

UPDATE:

If I run

lvs -a

when systemvolume_raid1 is not visible, I get many repetitions of the following line above the table that lvs -a normally prints:

Expected raid segment type but got NULL instead.

Notes:

1. The LVM article on the Arch Wiki says that you should ensure the kernel parameter "root" points to the mapped device, e.g. /dev/vg-name/lv-name. In my case, the parameter points to /dev/mapper/volumegroup0-systemvolume_raid1, but changing it to the form recommended by the wiki changes nothing except the device name in the error messages at boot (it becomes the new value of "root", as one might expect); see the sketch after these notes.

2. I'm using GRUB, generating configuration with grub-mkconfig, if that's relevant.

3. My problem appears to be similar to this unanswered problem on Unix & Linux Stack Exchange.
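
For note 1, one way to test the wiki's form is to edit the kernel line at the GRUB menu (press 'e' on the boot entry); roughly, and with the other parameters omitted, the line becomes:

linux /vmlinuz-linux root=/dev/volumegroup0/systemvolume_raid1 rw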

Last edited by C-Ren (2019-08-04 15:37:39)

Offline

#2 2019-06-26 14:56:27

C-Ren
Member
Registered: 2019-03-21
Posts: 19

Re: [SOLVED] Root volume on RAID 1 / LVM not found during boot

I wiped the partition tables of both disks and started again from scratch, except this time I didn't create a swapfile on the root filesystem. Everything seems to be working so far, so I have to assume that the swapfile caused the issue.

Offline

#3 2019-07-31 20:19:57

C-Ren
Member
Registered: 2019-03-21
Posts: 19

Re: [SOLVED] Root volume on RAID 1 / LVM not found during boot

Offline

#4 2019-07-31 20:42:49

jasonwryan
Anarchist
From: .nz
Registered: 2009-05-09
Posts: 30,424
Website

Re: [SOLVED] Root volume on RAID 1 / LVM not found during boot

C-Ren wrote:

Don't do this. If you want help here, then don't expect the people you are asking for help to go off somewhere else for information.


Moving to NC...


Arch + dwm   •   Mercurial repos  •   Surfraw

Registered Linux User #482438

Offline

#5 2019-08-04 15:34:01

C-Ren
Member
Registered: 2019-03-21
Posts: 19

Re: [SOLVED] Root volume on RAID 1 / LVM not found during boot

Alright, I'm sorry. I won't do that again.

I managed to get it to work again after running a bunch of

lvchange

commands and trying to boot the system with

systemd-nspawn.

I'm not sure what actually solved the problem, but I think it was the command

lvchange --syncaction repair <LV_raid>
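
I don't have the exact history, but the overall sequence was something along these lines (run from the live USB, with the volume group activated and the root LV mounted at /mnt; details are from memory):

lvchange --syncaction check volumegroup0/systemvolume_raid1
lvs -a -o +raid_sync_action,raid_mismatch_count volumegroup0
lvchange --syncaction repair volumegroup0/systemvolume_raid1
systemd-nspawn -bD /mnt     # test-boot the installed system in a container before rebooting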

Last edited by C-Ren (2019-08-04 15:37:03)

Offline
