
#1 2014-11-14 09:29:41

cypher_zero
Member
Registered: 2014-10-23
Posts: 50

LVM raid boot issues

EDIT: The issue appears to be not at all what I thought it was.  I'm still having some trouble; see the most recent post for an update on the actual issue.  The title has been changed accordingly.

EDIT: After having done a bit more digging and troubleshooting, I'm now fairly certain that my issue is due to my BTRFS volumes not mounting during boot for some reason.  See the second post below for more info...

I'm trying to set up my first full Arch install and I'm having some issues that I think are due to my LVM/BTRFS configuration.  I had posted previously here thinking that it was an issue with bootstrap or EFI, but I'm now fairly confident that LVM/BTRFS is the issue.

For storage I have three 512 GB SSDs, all set up the same with a GPT partition table.  I currently have Kubuntu installed and I'd like to dual boot, at least initially.

All 3 of my SSDs are set up the same and configured like this:

  # lsblk
NAME                       MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                          8:0    0  477G  0 disk 
├─sda1                       8:1    0    1M  0 part [GRUB]
├─sda2                       8:2    0  224G  0 part 
│ ├─coreVG-swapLV          252:0    0    3G  0 lvm  [SWAP]
│ ├─coreVG-archLV_rmeta_0  252:1    0    4M  0 lvm  
│ │ └─coreVG-archLV        252:7    0   30G  0 lvm  / - Arch root
│ ├─coreVG-archLV_rimage_0 252:2    0   15G  0 lvm  
│ │ └─coreVG-archLV        252:7    0   30G  0 lvm  / - Arch root
│ ├─coreVG-kubuLV_rmeta_0  252:8    0    4M  0 lvm  
│ │ └─coreVG-kubuLV        252:14   0   30G  0 lvm    - Kubuntu root
│ ├─coreVG-kubuLV_rimage_0 252:9    0   15G  0 lvm  
│ │ └─coreVG-kubuLV        252:14   0   30G  0 lvm     - Kubuntu root
│ ├─coreVG-bootLV_rmeta_0  252:15   0    4M  0 lvm  
│ │ └─coreVG-bootLV        252:21   0    1G  0 lvm     - Kubuntu boot
│ ├─coreVG-bootLV_rimage_0 252:16   0  512M  0 lvm  
│ │ └─coreVG-bootLV        252:21   0    1G  0 lvm     - Kubuntu boot
│ └─coreVG-archbootLV      252:22   0    1G  0 lvm  /boot - Arch boot
└─sda3                       8:3    0  252G  0 part /home

Now, I've tried to do the Arch install without the dual boot multiple times, and I repeatedly get the same issue:

ERROR: Unable to find root device '/dev/mapper/coreVG-archLV'

After those failed attempts, I decided to reinstall Kubuntu and set up a dual boot so that I'd have something working while I tried to figure out this LVM thing.

What I currently have going on is the above dual-boot configuration.  Here's what I've done so far:

Now, after rebooting I'm still getting the same error message as before, this time even with the Kubuntu install.

So, what am I missing here?  As near as I can tell, I've followed all the install instructions properly, but something with the Arch install is messing up GRUB even for my Kubuntu install.  I was only able to get booted back up by restoring my previous grub.cfg file.

Last edited by cypher_zero (2015-02-09 22:14:11)

Offline

#2 2014-11-17 19:54:38

cypher_zero
Member
Registered: 2014-10-23
Posts: 50

Re: LVM raid boot issues

So I'm now thinking that the issue is due to my BTRFS partitions not mounting properly during boot.  I'm now able to get GRUB2 to 'see' my Arch install, and I've manually rechecked all the settings, which appear to be good (see below).  What happens on boot is that I can select Arch and it will attempt to boot, but then I'll be dropped into the recovery shell.  I'm also getting a couple of errors related to: "Failed to open /dev/btrfs-control"

Within the recovery shell, I can cd into my /dev folder, and while it shows my archbootLV (/boot), it's not showing my archLV (root).

Here's what the relevant portion of my grub.cfg looks like:

insmod part_gpt
insmod part_gpt
insmod part_gpt
insmod lvm
insmod ext2
insmod btrfs
set root='lvmid/lLWG0Q-sALm-n1OG-Xigs-3WP5-034H-0D3HoR/UulbUO-eCet-h1x2-evrm-Np9T-RGMC-1yBxC2'
if [ x$feature_platform_search_hint = xy ]; then
  search --no-floppy --fs-uuid --set=root --hint='lvmid/lLWG0Q-sALm-n1OG-Xigs-3WP5-034H-0D3HoR/UulbUO-eCet-h1x2-evrm-Np9T-RGMC-1yBxC2'  40a96b1f-b98a-42f7-973c-823e76243282
else
  search --no-floppy --fs-uuid --set=root 40a96b1f-b98a-42f7-973c-823e76243282
fi
linux /vmlinuz-linux root=/dev/mapper/coreVG-archLV ro rootflags=subvol=@archroot
initrd /initramfs-linux.img

I've been searching all over and I've tried most everything I can think of.  I've read through the BTRFS wiki multiple times now, and I'm still not sure what I've done incorrectly.  Any help would be appreciated.

Offline

#3 2014-11-17 19:58:53

cedeel
Member
From: ~
Registered: 2009-08-25
Posts: 176
Website

Re: LVM raid boot issues

Hello, what does your /etc/mkinitcpio.conf look like?

Offline

#4 2014-11-17 20:03:00

WorMzy
Forum Moderator
From: Scotland
Registered: 2010-06-16
Posts: 11,896
Website

Re: LVM raid boot issues

Try adding btrfs to your MODULES array in mkinitcpio.conf and regenerating your initramfs.  There have been a few reports of btrfs RAID partitions not being mounted during boot, and it seems there's a bug where the btrfs module isn't inserted into the kernel in a timely fashion.  This bug may be biting you as well.
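A minimal sketch of that suggestion, run here against a throwaway sample file rather than the real /etc/mkinitcpio.conf (the sample filename and its starting contents are assumptions for illustration):

```shell
# Work on a throwaway copy so nothing on the real system is touched.
conf=mkinitcpio.conf.sample
printf 'MODULES="crc32c"\n' > "$conf"

# Append btrfs inside the existing MODULES="..." quotes.
sed -i 's/^MODULES="\([^"]*\)"/MODULES="\1 btrfs"/' "$conf"
grep '^MODULES=' "$conf"    # MODULES="crc32c btrfs"

# On the real system, regenerate the initramfs afterwards:
# mkinitcpio -p linux
```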


Sakura:-
Mobo: MSI MAG X570S TORPEDO MAX // Processor: AMD Ryzen 9 5950X @4.9GHz // GFX: AMD Radeon RX 5700 XT // RAM: 32GB (4x 8GB) Corsair DDR4 (@ 3000MHz) // Storage: 1x 3TB HDD, 6x 1TB SSD, 2x 120GB SSD, 1x 275GB M2 SSD

Making lemonade from lemons since 2015.

Offline

#5 2014-11-17 20:03:57

cypher_zero
Member
Registered: 2014-10-23
Posts: 50

Re: LVM raid boot issues

Here's my /etc/mkinitcpio.conf:

# vim:set ft=sh
# MODULES
# The following modules are loaded before any boot hooks are
# run.  Advanced users may wish to specify all system modules
# in this array.  For instance:
#     MODULES="piix ide_disk reiserfs"
MODULES="crc32c"

# BINARIES
# This setting includes any additional binaries a given user may
# wish into the CPIO image.  This is run last, so it may be used to
# override the actual binaries included by a given hook
# BINARIES are dependency parsed, so you may safely ignore libraries
BINARIES=""

# FILES
# This setting is similar to BINARIES above, however, files are added
# as-is and are not parsed in any way.  This is useful for config files.
FILES=""

# HOOKS
# This is the most important setting in this file.  The HOOKS control the
# modules and scripts added to the image, and what happens at boot time.
# Order is important, and it is recommended that you do not change the
# order in which HOOKS are added.  Run 'mkinitcpio -H <hook name>' for
# help on a given hook.
# 'base' is _required_ unless you know precisely what you are doing.
# 'udev' is _required_ in order to automatically load modules
# 'filesystems' is _required_ unless you specify your fs modules in MODULES
# Examples:
##   This setup specifies all modules in the MODULES setting above.
##   No raid, lvm2, or encrypted root is needed.
#    HOOKS="base"
#
##   This setup will autodetect all modules for your system and should
##   work as a sane default
#    HOOKS="base udev autodetect block filesystems"
#
##   This setup will generate a 'full' image which supports most systems.
##   No autodetection is done.
#    HOOKS="base udev block filesystems"
#
##   This setup assembles a pata mdadm array with an encrypted root FS.
##   Note: See 'mkinitcpio -H mdadm' for more information on raid devices.
#    HOOKS="base udev block mdadm encrypt filesystems"
#
##   This setup loads an lvm2 volume group on a usb device.
#    HOOKS="base udev block lvm2 filesystems"
#
##   NOTE: If you have /usr on a separate partition, you MUST include the
#    usr, fsck and shutdown hooks.
HOOKS="base udev autodetect modconf block lvm2 btrfs filesystems keyboard fsck"

# COMPRESSION
# Use this to compress the initramfs image. By default, gzip compression
# is used. Use 'cat' to create an uncompressed image.
#COMPRESSION="gzip"
#COMPRESSION="bzip2"
#COMPRESSION="lzma"
#COMPRESSION="xz"
#COMPRESSION="lzop"
#COMPRESSION="lz4"

# COMPRESSION_OPTIONS
# Additional options for the compressor
#COMPRESSION_OPTIONS=""

Thanks for your help!

Offline

#6 2014-11-17 20:05:04

cypher_zero
Member
Registered: 2014-10-23
Posts: 50

Re: LVM raid boot issues

WorMzy wrote:

Try adding btrfs to your MODULES array in mkinitcpio.conf and regenerating your initramfs.  There have been a few reports of btrfs RAID partitions not being mounted during boot, and it seems there's a bug where the btrfs module isn't inserted into the kernel in a timely fashion.  This bug may be biting you as well.

Thanks!  Giving that a shot right now.

Updated /etc/mkinitcpio.conf:

MODULES="crc32c btrfs"

BINARIES=""

FILES=""

HOOKS="base udev autodetect modconf block lvm2 btrfs filesystems keyboard fsck"

Last edited by cypher_zero (2014-11-17 20:09:41)

Offline

#7 2014-11-18 01:04:50

cypher_zero
Member
Registered: 2014-10-23
Posts: 50

Re: LVM raid boot issues

Still no luck.  Getting the same errors as before:

hwdb.bin does not exist, please run udevadm hwdb --update

Setup ERROR: setup context command for slot 3

Error device '/dev/mapper/coreVG-archLV' not found. Skipping fsck

Error unable to find root device '/dev/mapper/coreVG-archLV'

Any other ideas?  I'm at my wit's end here as to what is going on with this.

Offline

#8 2014-11-20 00:48:19

dchusovitin
Member
Registered: 2013-07-16
Posts: 1

Re: LVM raid boot issues

I had the same issue: systemd.mount couldn't mount the btrfs partition on my second disk (the first one was fine).
Running "udevadm hwdb --update" and rebooting helped me.

Offline

#9 2014-11-20 01:43:13

cypher_zero
Member
Registered: 2014-10-23
Posts: 50

Re: LVM raid boot issues

I tried doing that while chroot'ed in and it had no effect.  Were you able to run the command from the recovery shell?

Offline

#10 2014-11-22 11:57:46

cypher_zero
Member
Registered: 2014-10-23
Posts: 50

Re: LVM raid boot issues

So I tried running "udevadm hwdb --update" again to see if that fixed things.  The error didn't go away initially, but then I added "/etc/udev/hwdb.bin" to my mkinitcpio.conf and ran "mkinitcpio -p linux".  Now the error is gone, but I'm still getting no further in being able to boot.

My mkinitcpio.conf:

MODULES="crc32c btrfs"

BINARIES=""

FILES="/etc/udev/hwdb.bin"

HOOKS="base udev autodetect modconf block lvm2 btrfs filesystems keyboard fsck"

Error messages I'm still getting:

Setup ERROR: setup context command for slot 3

Error device '/dev/mapper/coreVG-archLV' not found. Skipping fsck

Error unable to find root device '/dev/mapper/coreVG-archLV'

Offline

#11 2014-11-22 12:41:59

TheSaint
Member
From: my computer
Registered: 2007-08-19
Posts: 1,523

Re: LVM raid boot issues

Did you remake your grub.cfg?
It might be better to keep the Kubuntu GRUB and add an Arch menu entry.

Last edited by TheSaint (2014-11-22 12:42:27)


do it good first, it will be faster than do it twice the saint wink

Offline

#12 2014-11-22 12:42:56

cypher_zero
Member
Registered: 2014-10-23
Posts: 50

Re: LVM raid boot issues

TheSaint wrote:

Did you remake your grub.cfg?
It might be better to keep the Kubuntu GRUB and add an Arch menu entry.

Yeah, that's what I did.  I've tried it both ways; same issue either way I do it.

Last edited by cypher_zero (2014-11-22 12:43:39)

Offline

#13 2014-11-22 12:51:17

TheSaint
Member
From: my computer
Registered: 2007-08-19
Posts: 1,523

Re: LVM raid boot issues

Then you could try booting a Kubuntu live session and chrooting into Kubuntu from there, maybe.



Offline

#14 2014-11-22 12:53:37

cypher_zero
Member
Registered: 2014-10-23
Posts: 50

Re: LVM raid boot issues

I've done that too.  I have no issues chroot'ing in, either from a live CD or my regular Kubuntu install, and I can run programs and everything from within the chroot.  The issue is that I can't get the root partition to mount at boot when I'm trying to boot Arch.

Offline

#15 2014-11-22 12:55:52

TheSaint
Member
From: my computer
Registered: 2007-08-19
Posts: 1,523

Re: LVM raid boot issues

So from the chroot, can't you find a fix for your GRUB?
Is this topic of any help?



Offline

#16 2014-11-22 13:18:14

cypher_zero
Member
Registered: 2014-10-23
Posts: 50

Re: LVM raid boot issues

TheSaint wrote:

So from the chroot, can't you find a fix for your GRUB?
Is this topic of any help?

Grub does not appear to be the issue. 

The post you linked, though, looks like the same issue I'm having, but I don't understand exactly what yafeng did to get his system to boot.

I'm going to try this:

WorMzy wrote:

Try this: remove the btrfs hook, add btrfs to your modules array in mkinitcpio.conf, then rebuild your initramfs and see if you can boot.

Offline

#17 2014-11-22 14:12:49

TheSaint
Member
From: my computer
Registered: 2007-08-19
Posts: 1,523

Re: LVM raid boot issues

Well, if Kubuntu was in proper order, at least one GRUB would work; correct me if I misunderstood.
Also, this topic covers a situation similar to yours.
I say this because it seems to me that GRUB is looking for the wrong partition or a wrongly mapped device.  If the automatic detection knows how to get past this point, then you're done wink.
On the other hand, it could be a module issue where the volume layout isn't recognized.

Last edited by TheSaint (2014-11-22 14:16:32)



Offline

#18 2015-02-09 22:12:06

cypher_zero
Member
Registered: 2014-10-23
Posts: 50

Re: LVM raid boot issues

Well, I've finally figured out the problem after a lot of reading.

First, a bit of explanation: I had set up my root and home partitions to use LVM RAID 5, like so:

 lvcreate --type raid5 -i 2 -n archLV -L 40GB vg1

It turns out that this is not well supported in Arch, though it causes no issues in other distros that I've used (mainly Ubuntu-derived).

I eventually figured out to add the "dm-raid" module to my /etc/mkinitcpio.conf, and I'm now one step closer to having an actual working boot.  However, whenever I boot, I'm dropped into the recovery shell.  If I run "lvm vgscan && lvm vgchange -ay" and then "exit", I can boot normally.

So, my question is: what do I have to do to get to where I don't have to run "lvm vgscan && lvm vgchange -ay" to boot?

EDIT:
Relevant portions of my /etc/mkinitcpio.conf :

MODULES="dm_mod dm-raid"

HOOKS="base udev autodetect modconf block lvm2 filesystems keyboard fsck"
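The fix described above can be sketched as follows, again against a throwaway sample file standing in for the real /etc/mkinitcpio.conf (the sample filename and starting contents are assumptions for illustration):

```shell
# Throwaway copy standing in for /etc/mkinitcpio.conf.
conf=mkinitcpio-dmraid.sample
printf 'MODULES="dm_mod"\n' > "$conf"

# Ensure dm-raid is present in MODULES so the initramfs can assemble
# LVM RAID logical volumes.
grep -q 'dm-raid' "$conf" || sed -i 's/^MODULES="\([^"]*\)"/MODULES="\1 dm-raid"/' "$conf"
grep '^MODULES=' "$conf"    # MODULES="dm_mod dm-raid"

# On the real system, rebuild the initramfs afterwards:
#   mkinitcpio -p linux
# Until the activation problem is solved, the manual workaround from
# the recovery shell remains:
#   lvm vgscan && lvm vgchange -ay
#   exit
```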

Last edited by cypher_zero (2015-02-09 22:17:20)

Offline
