After updating the kernel to 6.11.* the system does not boot. If I downgrade to 6.10.10, it works again.
The error happens during systemd startup, where I get:
[FAILED] Failed to mount /efi
Recently I switched from GRUB to systemd-boot and I have a feeling that I might have messed something up.
After booting a live USB environment and mounting the root and ESP partitions (root is on LVM):
mount /dev/vg0/lvroot /mnt
mount /dev/nvme0n1p1 /mnt/efi
arch-chroot /mnt
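As an aside, if the volume group is not auto-activated in the live environment, /dev/vg0/lvroot will not exist until it is activated; the full recovery sequence would be something like this (a sketch, assuming the volume group is named vg0 as in the thread):

```shell
# Activate all LVM volume groups so the logical volumes appear under /dev/vg0/
vgchange -ay
# Mount the root logical volume, then the ESP on top of it
mount /dev/vg0/lvroot /mnt
mount /dev/nvme0n1p1 /mnt/efi
# Enter the installed system
arch-chroot /mnt
```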
The contents of the /boot and /efi directories are as follows:
ls /boot
amd-ucode.img initramfs-linux-fallback.img initramfs-linux.img intel-ucode.img vmlinuz-linux
ls /efi
EFI amd-ucode.img initramfs-linux-fallback.img initramfs-linux.img intel-ucode.img loader vmlinuz-linux
I feel like when I install a new kernel, it only updates /boot and does not touch /efi, so /efi always stays in the state it was in when it was installed. That results in a mismatch between the kernel on /efi and the modules on the root filesystem, but even if that's the case, I don't know how to fix it.
Also, I should probably note that I have two root partitions: lvroot and test-root. I'm currently trying to get into lvroot, but I installed systemd-boot on test-root.
Under GRUB, the boot partition was previously mounted at /boot.
Last edited by fib_nm (2024-11-11 17:40:16)
fib_nm wrote: I feel like when I install a new kernel, it only updates /boot and does not touch /efi, so /efi always stays in the state it was in when it was installed. That results in a mismatch between the kernel on /efi and the modules on the root filesystem, but even if that's the case, I don't know how to fix it.
Yeah, I also think that is the case, as it would explain the symptoms you are seeing. Could you verify this by posting the output of "ls -l /boot /efi"?
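Another quick way to confirm that the two copies have diverged (a sketch, run from the booted system or the chroot; paths taken from the thread):

```shell
# Compare the kernel image on /boot with the copy on the ESP.
# cmp -s exits 0 only when the two files are byte-identical.
if cmp -s /boot/vmlinuz-linux /efi/vmlinuz-linux; then
    echo "in sync"
else
    echo "diverged"
fi

# The modification times also reveal which copy is stale
stat -c '%y %n' /boot/vmlinuz-linux /efi/vmlinuz-linux
```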
If I understand correctly, you have two different root partitions, which I assume means you have two entries in your GRUB configuration so you can choose which root you want to use.
But both of these root partitions are using the same boot partition with the same kernel name (vmlinuz-linux). That is going to cause problems when you update the kernel from one root and then try to boot the other.
What is in your /etc/fstab file? Is the /boot line the same in both roots?
Nothing is too wonderful to be true, if it be consistent with the laws of nature -- Michael Faraday
Sometimes it is the people no one can imagine anything of who do the things no one can imagine. -- Alan Turing
---
How to Ask Questions the Smart Way
fib_nm wrote: I feel like when I install a new kernel, it only updates /boot and does not touch /efi, so /efi always stays in the state it was in when it was installed. That results in a mismatch between the kernel on /efi and the modules on the root filesystem, but even if that's the case, I don't know how to fix it.
Yeah, I also think that is the case, as it would explain the symptoms you are seeing. Could you verify this by posting the output of "ls -l /boot /efi"?
ls -l /boot
-rwxr-xr-x 1 root root 81920 Oct 9 00:09 amd-ucode.img
-rw------- 1 root root 141375752 Oct 21 18:03 initramfs-linux-fallback.img
-rw------- 1 root root 62244729 Oct 21 18:03 initramfs-linux.img
-rwxr-xr-x 1 root root 8126464 Oct 9 00:09 intel-ucode.img
-rw------- 1 root root 13476352 Oct 21 18:03 vmlinuz-linux
ls -l /efi
drwxr-xr-x 5 root root 4096 Sep 27 13:32 EFI
-rwxr-xr-x 1 root root 81920 Sep 9 15:24 amd-ucode.img
-rwxr-xr-x 1 root root 131060405 Sep 27 11:49 initramfs-linux-fallback.img
-rwxr-xr-x 1 root root 61998382 Sep 27 11:48 initramfs-linux.img
-rwxr-xr-x 1 root root 8126464 Sep 10 21:18 intel-ucode.img
drwxr-xr-x 3 root root 4096 Oct 21 18:19 loader
-rwxr-xr-x 1 root root 13406720 Sep 27 11:42 vmlinuz-linux
If I understand correctly, you have two different root partitions, which I assume means you have two entries in your GRUB configuration so you can choose which root you want to use.
But both of these root partitions are using the same boot partition with the same kernel name (vmlinuz-linux). That is going to cause problems when you update the kernel from one root and then try to boot the other.
What is in your /etc/fstab file? Is the /boot line the same in both roots?
No, I don't have GRUB. When I installed the second root, I installed systemd-boot and started using it.
I have since deleted everything GRUB-related, but only after the system had already started failing to boot, so I don't think the deletion caused the problem. I actually thought removing GRUB would fix it, but it didn't.
Last edited by fib_nm (2024-10-21 16:58:24)
systemd-boot will have the same issue. Having both roots share a single kernel image will cause problems: while using one root, updating the kernel on /boot breaks the other root, which will then fail to find its modules the next time it is booted.
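The failure mode described above can be checked after booting either root (a hypothetical one-liner, not something from the thread): the running kernel release must have a matching module tree on that root.

```shell
# If the bootloader loaded a vmlinuz that was installed by the *other* root,
# the running kernel release has no matching directory under /usr/lib/modules
test -d "/usr/lib/modules/$(uname -r)" && echo "modules match" || echo "module mismatch"
```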
systemd-boot will have the same issue. Having both roots share a single kernel image will cause problems: while using one root, updating the kernel on /boot breaks the other root, which will then fail to find its modules the next time it is booted.
After reading your message I updated the kernel on the second root (test-root) and it helped. Does this mean I'll now always have to update the second root as well? I rarely use test-root, so it would be more convenient if I only had to update the first root (lvroot). Is there a way to do this?
Last edited by fib_nm (2024-10-21 17:39:15)
No, it can continue to use whatever bootloader you want. But how do you select which bootloader to use? Through UEFI boot entries?
It sounds like lvroot is not properly mounting /boot. Please post the output of mount when booted using that root.
No, it can continue to use whatever bootloader you want. But how do you select which bootloader to use? Through UEFI boot entries?
It sounds like lvroot is not properly mounting /boot. Please post the output of mount when booted using that root.
If you meant systemctl status efi.mount, here it is:
[fib_nm@archlaptop ~]$ systemctl status efi.mount
● efi.mount - /efi
Loaded: loaded (/etc/fstab; generated)
Active: active (mounted) since Mon 2024-10-21 20:18:54 MSK; 2h 4min ago
Invocation: 5b6ada2bd0e3476fb15c620d8007f8d8
Where: /efi
What: /dev/nvme0n1p1
Docs: man:fstab(5)
man:systemd-fstab-generator(8)
Tasks: 0 (limit: 18852)
Memory: 28K (peak: 1.5M)
CPU: 7ms
CGroup: /system.slice/efi.mount
Oct 21 20:18:54 archlaptop systemd[1]: Mounting /efi...
Oct 21 20:18:54 archlaptop systemd[1]: Mounted /efi.
Also, you asked about /etc/fstab:
[fib_nm@archlaptop ~]$ cat /etc/fstab
# Static information about the filesystems.
# See fstab(5) for details.
# <file system> <dir> <type> <options> <dump> <pass>
# /dev/mapper/vg0-lvroot
UUID=8433740e-ecb0-4b9b-a3cd-03ba1b301d70 / ext4 rw,relatime 0 1
# /dev/nvme0n1p1
UUID=4F4A-C00A /efi vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro 0 2
# /dev/mapper/vg0-lvhome
UUID=b0d64b6c-f2c1-406b-b6b7-8f692e676e81 /home ext4 rw,relatime 0 2
# /dev/mapper/vg0-lvswap
UUID=a30345c9-0bd2-4259-ad40-e2669baac890 none swap defaults 0 0
Clean up the /boot path on the root partition and mount UUID=4F4A-C00A to /boot.
Edit:
https://wiki.archlinux.org/title/EFI_sy … unt_points
Alternatively, configure your bootloader to boot from the root partition.
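Applied to the fstab posted earlier in the thread, the first suggestion would amount to changing only the mount point of the ESP line (same UUID, same options; a sketch, not a verified config):

```shell
# /dev/nvme0n1p1 -- mount the ESP at /boot so kernel updates land on it directly
UUID=4F4A-C00A /boot vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro 0 2
```

After changing this line, the old kernel and initramfs files left on the root partition's /boot directory should be removed so they are not shadowed by the mount.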
Last edited by seth (2024-10-21 20:14:28)
I changed the directory that mkinitcpio -P writes to from /boot to /efi by changing all "boot" entries to "efi" in /etc/mkinitcpio.d/linux.preset. This basically fixed the problem for me.
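For reference, the edited preset would look roughly like this (a sketch based on the stock preset shipped with the linux package, with the paths pointed at /efi; exact contents vary per install):

```shell
# mkinitcpio preset file for the 'linux' package,
# with output paths redirected from /boot to /efi

ALL_kver="/efi/vmlinuz-linux"

PRESETS=('default' 'fallback')

default_image="/efi/initramfs-linux.img"

fallback_image="/efi/initramfs-linux-fallback.img"
fallback_options="-S autodetect"
```

Note that the pacman hook that installs vmlinuz-linux itself still copies the kernel to /boot, so this approach only redirects the initramfs images unless the kernel is copied over as well; mounting the ESP at /boot (as suggested above) avoids that split entirely.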