Hello, fellow Arch users!
I have a Lenovo Legion 5 pro 16arh7h with an Arch Linux and Windows dual boot, which worked fine until yesterday. I updated some packages (via `pacman -Syu`), rebooted into Windows, updated Windows, and then couldn't boot Arch Linux anymore because of this problem. Secure Boot, Fast Startup in the Windows settings, and Instant Boot in the BIOS are disabled.
The error:
VFS: Cannot open root device "UUID=bdd26580-5b4b-4ac1-b8d8-fe3ffd763b17" or unknown-block(0,0): error -19
Please append a correct "root=" boot option; here are the available partitions:
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
Part of the boot log until error: https://pasteboard.co/q3pxz1yAWPVe.jpg
Attached: lshw, grub.cfg, blkid, lsblk -f, lsblk, lsinitcpio of the initramfs image, mkinitcpio logs, /etc/fstab, and the list of upgraded packages.
The problem appears both on linux and linux-lts, and with both the regular and fallback initramfs images. I tried downgrading to the previous versions of the linux, linux-lts, linux-firmware and linux-firmware-whence packages (other packages seem irrelevant) and uninstalled the latest installed Windows updates (at least those Windows allowed me to uninstall); it didn't help.
As far as I understand, the problem is that during early init Linux doesn't recognize the SSD. I discovered that the nvme module was not present in the initramfs image with the "default" /etc/mkinitcpio.conf (MODULES was empty and the block hook came after the autodetect hook), so I added the nvme (and ext4) modules manually. After that the nvme module appeared in the image (the attached initcpio logs are with the manually added nvme module), but the problem remained. I also moved the block hook before the autodetect hook; it didn't help either.
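For reference, the relevant lines in my /etc/mkinitcpio.conf now look roughly like this (a sketch; apart from the reordered block hook, the hook list is the stock default):
MODULES=(nvme ext4)
HOOKS=(base udev block autodetect modconf kms keyboard keymap consolefont filesystems fsck)
followed by regenerating all images with
# mkinitcpio -P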
Live USB with Arch 6.2.1 works fine. I really don't know what to try next; I would appreciate any help.
Offline
# less /proc/cmdline
Check there whether the UUID of the device stated in the error is your root partition's UUID and whether it is mentioned here.
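For example, a healthy line typically looks something like this (a sketch; paths and parameters differ per setup):
BOOT_IMAGE=/vmlinuz-linux root=UUID=<your-root-uuid> rw ...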
Last edited by ranurag (2023-08-06 02:59:55)
Offline
From the live USB, do NOT chroot into the system, and post the output of "lsblk -f"; you can use
lsblk -f | curl -F 'f:1=<-' ix.io
to paste it.
Offline
# less /proc/cmdline
Check there whether the UUID of the device stated in the error is your root partition's UUID and whether it is mentioned here.
I have access to a shell only via the live USB, and its `/proc/cmdline` shows the USB stick as root, so I'm not sure what I should check there. However, the root of my system is on `/dev/nvme0n1p5`, which has the same UUID as stated in the error. I also tried specifying the root as `root=/dev/nvme0n1p5`, but it didn't help. The error says "here are the available partitions:" and the error code is -19, i.e. -ENODEV; I guess this means the problem is not that the UUID is wrong, but that Linux doesn't detect any partitions at all?
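For the record, the UUID can be checked against the error message from the live USB like this (a sketch; /dev/nvme0n1p5 is my root partition):
# blkid -s UUID -o value /dev/nvme0n1p5
which prints bdd26580-5b4b-4ac1-b8d8-fe3ffd763b17 here, i.e. exactly the UUID from the panic.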
From the live USB, do NOT chroot into the system, and post the output of "lsblk -f"; you can use
lsblk -f | curl -F 'f:1=<-' ix.io
to paste it.
Here it is: http://ix.io/4CCM
Last edited by TomMorfin (2023-08-06 10:55:29)
Offline
bdd26580-5b4b-4ac1-b8d8-fe3ffd763b17 is there and looks like the linux root drive.
Because of the Windows partition, see the 3rd link below. Mandatory.
Disable it (it's NOT the BIOS setting!) and reboot Windows and Linux (if possible) twice, for voodoo reasons.
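For reference, one way to rule Fast Startup out for good is to disable hibernation entirely from an elevated Windows command prompt (this also removes the hiberfile):
powercfg /h off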
Other than that, can you boot the fallback initramfs? (Though chances are low, given that you've explicitly added nvme to the initramfs already.)
Offline
bdd26580-5b4b-4ac1-b8d8-fe3ffd763b17 is there and looks like the linux root drive.
Because of the Windows partition, see the 3rd link below. Mandatory.
Disable it (it's NOT the BIOS setting!) and reboot Windows and Linux (if possible) twice, for voodoo reasons. Other than that, can you boot the fallback initramfs? (Though chances are low, given that you've explicitly added nvme to the initramfs already.)
Fast Startup and hibernation are disabled in Windows. No; with the fallback initramfs the problem remains the same.
Last edited by TomMorfin (2023-08-06 11:07:52)
Offline
What happens if you add "rootwait" (waits forever for the root device) or "rootdelay=60" (waits for 60 seconds before looking at the root device, so you *have* to expect and wait for the timeout)?
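(To test this without editing grub.cfg, press "e" on the boot entry in the GRUB menu and append the parameter to the line starting with "linux", e.g.
linux /vmlinuz-linux root=UUID=... rw rootwait
then boot with Ctrl-x; the change is not persistent.)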
Offline
What happens if you add "rootwait" (waits forever for the root device) or "rootdelay=60" (waits for 60 seconds before looking at the root device, so you *have* to expect and wait for the timeout)?
Adding `rootdelay=60` indeed causes Linux to wait for 60 seconds before mounting the root device, and then it fails with the same error. I added `rootwait`; so far it has been waiting for ~15 minutes and nothing happens.
Last edited by TomMorfin (2023-08-06 11:38:20)
Offline
Try "nvme_core.default_ps_max_latency_us=0 iommu=soft", https://wiki.archlinux.org/title/Solid_ … leshooting
Offline
Try "nvme_core.default_ps_max_latency_us=0 iommu=soft", https://wiki.archlinux.org/title/Solid_ … leshooting
Didn't help; the same error remains.
Offline
So that the root account is unlocked in the initrd:
# cp /usr/lib/initcpio/install/systemd /etc/initcpio/install/
Then change line 144 of the newly created /etc/initcpio/install/systemd from
echo 'root:*:::::::' >"$BUILDROOT/etc/shadow"
to
echo 'root::::::::' >"$BUILDROOT/etc/shadow"
Rebuild the initrd.
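The same edit as a one-liner, for convenience (a sketch matching on content rather than the line number, which may shift between mkinitcpio versions):
# sed -i 's|root:\*:|root::|' /etc/initcpio/install/systemd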
Offline
So that the root account is unlocked in the initrd:
# cp /usr/lib/initcpio/install/systemd /etc/initcpio/install/
Then change line 144 of the newly created /etc/initcpio/install/systemd from
echo 'root:*:::::::' >"$BUILDROOT/etc/shadow"
to
echo 'root::::::::' >"$BUILDROOT/etc/shadow"
Rebuild the initrd.
I mounted the root partition, chrooted into it, and rebuilt the initrd after executing the commands you provided (mkinitcpio logs). Nothing changed; the error remains the same.
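For reference, the chroot sequence was roughly this (a sketch; /dev/nvme0n1p5 is my root partition, and I'm assuming here that the boot files live on /dev/nvme0n1p1 — adjust to the real layout):
# mount /dev/nvme0n1p5 /mnt
# mount /dev/nvme0n1p1 /mnt/boot
# arch-chroot /mnt
# mkinitcpio -P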
Last edited by TomMorfin (2023-08-06 13:06:20)
Offline
I think loqs' idea was rather directed towards https://wiki.archlinux.org/title/Genera … ery_shells because of https://bugs.archlinux.org/task/70408
Offline
I think loqs' idea was rather directed towards https://wiki.archlinux.org/title/Genera … ery_shells because of https://bugs.archlinux.org/task/70408
Oh, okay. So these commands loqs provided should unlock the root account to make a shell accessible before the root device is mounted? How do I do this? Adding `init=/bin/sh` to the boot parameters didn't work.
Offline
"systemd.unit=emergency.target" (only "emergency" should™ work as well)
Offline
"systemd.unit=emergency.target" (only "emergency" should™ work as well)
Neither of the two parameters worked; it still tries to mount root and panics. Maybe I should add or remove some mkinitcpio hooks?
UPD: removing systemd from the hooks in `mkinitcpio.conf` and specifying the break=premount parameter doesn't work either.
Last edited by TomMorfin (2023-08-06 14:10:21)
Offline
From GRUB, edit the boot options and check that the initrd option is present. If it is and you delete it, is the same result produced? If you delete the root option, is the same result produced?
Last edited by loqs (2023-08-06 14:26:43)
Offline
The following is the corresponding part of my current grub.cfg:
linux /vmlinuz-linux root=UUID=bdd26580-5b4b-4ac1-b8d8-fe3ffd763b17 rw rootfstype=ext4 loglevel=8 nvidia_drm.modeset=1
echo 'Loading initial ramdisk ...'
initrd /amd-ucode.img /initramfs-linux.img
The following edits produce the same error as in the OP:
linux /vmlinuz-linux root=UUID=bdd26580-5b4b-4ac1-b8d8-fe3ffd763b17 rw rootfstype=ext4 loglevel=8 nvidia_drm.modeset=1
echo 'Loading initial ramdisk ...'
linux /vmlinuz-linux initrd=/boot/initramfs-linux.img root=UUID=bdd26580-5b4b-4ac1-b8d8-fe3ffd763b17 rw rootfstype=ext4 loglevel=8 nvidia_drm.modeset=1
echo 'Loading initial ramdisk ...'
initrd /amd-ucode.img /initramfs-linux.img
The following
linux /vmlinuz-linux initrd=/boot/initramfs-linux.img root=UUID=bdd26580-5b4b-4ac1-b8d8-fe3ffd763b17 rw rootfstype=ext4 loglevel=8 nvidia_drm.modeset=1
echo 'Loading initial ramdisk ...'
linux /vmlinuz-linux initrd=/boot/initramfs-linux.img rw rootfstype=ext4 loglevel=8 nvidia_drm.modeset=1
echo 'Loading initial ramdisk ...'
produce
Booting a command list
Loading Linux linux ... Loading initial ramdisk
error: start_image() returned 0x8000000000000002.
Press any key to continue...
Removing the root option:
linux /vmlinuz-linux rw rootfstype=ext4 loglevel=8 nvidia_drm.modeset=1
echo 'Loading initial ramdisk ...'
initrd /amd-ucode.img /initramfs-linux.img
linux /vmlinuz-linux init=/bin/sh rw rootfstype=ext4 loglevel=8 nvidia_drm.modeset=1
echo 'Loading initial ramdisk ...'
initrd /amd-ucode.img /initramfs-linux.img
linux /vmlinuz-linux break=premount rw rootfstype=ext4 loglevel=8 nvidia_drm.modeset=1
echo 'Loading initial ramdisk ...'
initrd /amd-ucode.img /initramfs-linux.img
produces the same error as in the OP, with the UUID replaced by "(null)":
VFS: Cannot open root device "(null)" or unknown-block(0,0): error -19
All this was tested without the systemd hook in `mkinitcpio.conf`. UPD: with the systemd hook the behavior remains the same.
Last edited by TomMorfin (2023-08-06 15:21:09)
Offline
The error is kinda expected, but you'd want to be dropped into an emergency shell.
On a guess, remove "rootfstype=ext4" as well.
Offline
The error is kinda expected, but you'd want to be dropped into an emergency shell.
On a guess, remove "rootfstype=ext4" as well.
Didn't help. The error message changed to:
No filesystem could mount root, tried:
Kernel panic - ...
Offline
Try to remove /amd-ucode.img …
Offline
Try to remove /amd-ucode.img …
Didn't try that; I gave up on this.
I reinstalled the system and now everything works fine. Thank you all for your time. Not sure if I should mark the thread as solved, since technically it wasn't…
Last edited by TomMorfin (2023-08-07 00:15:19)
Offline
If you perfectly re-created the previous system, there might actually have been some on-disk file corruption (kernel image, modules, systemd) that caused this.
I'd keep an eye on NVMe errors, occasionally run "pacman -Qikk", and if in doubt disable APST; you don't want that to happen to any private data.
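For example (a sketch; smartctl comes from the smartmontools package, and the latency value is the usual workaround from the wiki page linked above):
# journalctl -k | grep -i nvme    # look for controller resets and I/O errors
# smartctl -a /dev/nvme0          # drive health and error log
To disable APST persistently, add this to the kernel command line:
nvme_core.default_ps_max_latency_us=0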
Offline