Hey Archers,
When I installed Arch I had quite a struggle because I had to install in UEFI mode. I can boot now without any problems. However, the problem starts all over again whenever I update the "linux" package. Last time it totally messed up my /boot directory; I could only start the emergency console.
As described at https://wiki.archlinux.org/index.php/EFISTUB, you have to manually copy the image files to the corresponding folder. I tried that, but it did not succeed. I tried every logical possibility: renamed files, created all kinds of folders to make it work. In the end I had to insert the Arch CD and reinstall GRUB. As you can understand, this is not an acceptable solution.
There is already a newer version of "linux" available for me, but I'd like a solution to this problem beforehand; after all that trouble, I'm a bit hesitant to apply the upgrade.
This is how I installed the UEFI system:
# mkdir /mnt/boot
# mount /dev/sda1 /mnt/boot
# arch-chroot /mnt
# pacman -S linux # reinstall the kernel and initramfs on the newly created boot partition
# grub-install --target=x86_64-efi --efi-directory=/boot --boot-directory=/boot/EFI --recheck
# vi /etc/default/grub # modifying the following line: GRUB_CMDLINE_LINUX="cryptdevice=/dev/sda4:root"
# mkinitcpio -p linux
# grub-mkconfig -o /boot/EFI/grub/grub.cfg
Below you can see how my /boot partition looks now (everything works at the moment).
My question is: do I have to copy the three files in /boot over to the initra~x.img/vmlinu~1 files in /boot/efi after updating "linux"? My tree looks totally different from anything I see in the wiki or anywhere else, yet it still works somehow.
$ tree /boot
/boot
|-- efi
|   |-- efi
|   |   |-- arch
|   |   |   `-- grubx64.efi
|   |   |-- boot
|   |   |   `-- bootx64.efi
|   |   `-- grub
|   |       `-- ... <irrelevant files>
|   |-- initra~1.img
|   |-- initra~2.img
|   `-- vmlinu~1
|-- EFI
|   `-- grub
|       `-- grub.cfg
|-- grub
|   `-- ... <irrelevant files>
|-- initramfs-linux-fallback.img
|-- initramfs-linux.img
`-- vmlinuz-linux
Any tips would be appreciated.
Last edited by freijon (2014-05-15 20:39:36)
Wouldn't it be simpler to have /boot in the ESP?
This can be done post-install using a live distro to create an EFI System Partition and then remount /boot there...
Para todos todo, para nosotros nada
/boot is already in the ESP as far as I can tell. As you can see, I mounted /dev/sda1 to /boot, and /dev/sda1 is my ESP.
Last edited by freijon (2014-04-15 04:57:53)
Short answer -- you have to copy the new vmlinuz... and initramfs... files to where they are being looked for. According to the tree you posted, that is /boot/efi.
Longer answer: you need to set up your mounts so that /boot points to the location that contains the init images. That way, a kernel update will put the files in the right place without you having to copy them. I'm no expert on EFI, but your setup does seem kind of muddled. Where does GRUB look for your initramfs... files when it boots? That's where the "updated" kernel files need to go.
Matt
"It is very difficult to educate the educated."
Short answer -- you have to copy the new vmlinuz... and initramfs... files to where they are being looked for. According to the tree you posted, that is /boot/efi.
I gave it a try today and unfortunately it didn't work. I had to insert the live CD again and re-install GRUB, reinstall linux etc. (see first post). I really have no idea what else I could try, and it's kind of annoying...
Odd. I am out of ideas then.
That does appear to be a very convoluted set-up going on there.
Using rEFInd here and the setup is much simpler:
/boot
|-- initramfs-linux.img
|-- initramfs-linux-fallback.img
|-- vmlinuz-linux
|-- refind_linux.conf
`-- efi (mountpoint for ESP)
    `-- EFI
        `-- refind
            |-- refind_x64.efi
            `-- remaining rEFInd config files
What might solve my problem is to install another boot loader such as Gummiboot. Maybe it's just GRUB that has problems on my system. This is new territory for me, though. I presume that I can install any new boot loader from the Arch CD while chrooted, right?
Today I tried installing gummiboot. Unfortunately, the system was left unbootable once again. Here is the file structure in /boot after a fresh gummiboot installation following the installation guide.
/boot
├── EFI
│   ├── Boot
│   │   └── BOOTX64.EFI
│   └── gummiboot
│       └── gummibootx64.efi
├── initramfs-linux-fallback.img
├── initramfs-linux.img
├── loader
│   ├── entries
│   │   └── arch-encrypted.conf
│   └── loader.conf
└── vmlinuz-linux
I made a new entry in /boot/loader/entries/
title Arch Linux (Encrypted)
linux /vmlinuz-linux
options initrd=/initramfs-linux.img
cryptdevice=UUID=fe88d73a-49d2-4896-a90e-ed0b500a6219:root root=UUID=5d310d04-700d-4854-b2c5-28fbe2af934f rw
#cryptdevice=/dev/sda4:root root=/dev/mapper/cryptroot rw
As you can see, I tried it with and without the UUID.
When I try to boot with the above configuration I keep getting this message:
ERROR: device '' not found. Skipping fsck.
ERROR: Unable to find root device ''.
You are being dropped to a recovery shell
Any ideas what went wrong? My guess is that my cryptdevice option is wrong, but I can't figure out what.
Why is there a line break after the initrd declaration?
Sakura:-
Mobo: MSI MAG X570S TORPEDO MAX // Processor: AMD Ryzen 9 5950X @4.9GHz // GFX: AMD Radeon RX 5700 XT // RAM: 32GB (4x 8GB) Corsair DDR4 (@ 3000MHz) // Storage: 1x 3TB HDD, 6x 1TB SSD, 2x 120GB SSD, 1x 275GB M2 SSD
Making lemonade from lemons since 2015.
I thought there had to be a line break because the line was wrapped on my mobile when viewing the wiki. Is that causing the problem?
Probably. Anything after the initrd declaration isn't being passed to the kernel, which is why you're getting the
Unable to find root device ''
messages.
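For reference, a fixed version of /boot/loader/entries/arch-encrypted.conf might look like the sketch below, with everything after the kernel path on a single options line (the UUIDs are copied from the post above; gummiboot also accepts a separate initrd line, which avoids the initrd= kernel parameter entirely):

```
title   Arch Linux (Encrypted)
linux   /vmlinuz-linux
initrd  /initramfs-linux.img
options cryptdevice=UUID=fe88d73a-49d2-4896-a90e-ed0b500a6219:root root=UUID=5d310d04-700d-4854-b2c5-28fbe2af934f rw
```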
As long as the new vmlinuz and initramfs are overwriting the old ones you booted with, and the file names aren't changing, there's no reason you should have to reinstall GRUB. GRUB must be changing permissions or doing something during the install process. I don't think this is an issue with your directory structure. Are you using an mkinitcpio hook to copy the files over or doing it manually? I just have a simple syslinux-efi setup and script to copy the files over to the ESP any time a kernel upgrade occurs. This works fine for me through upgrades, even with 3 different kernels installed. The script I got from https://wiki.archlinux.org/index.php/EF … tcpio_hook. Interestingly, I've never actually been able to get my system booting into efi mode without syslinux, and have always gotten the same issue you have with the recovery shell, even though technically all syslinux-efi is is a wrapper for efistub.
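A minimal sketch of such a copy step is below; the function name and paths are illustrative, not the wiki's actual hook, so adjust the target to wherever your boot loader really reads the images from:

```shell
#!/bin/sh
# Sketch: copy freshly installed kernel images to the ESP so the boot
# loader sees the same files the package manager just wrote.
copy_kernel_to_esp() {
    src="$1"   # where pacman installs the kernel, normally /boot
    esp="$2"   # target directory on the ESP, e.g. /boot/efi (assumption)
    mkdir -p "$esp"
    for f in vmlinuz-linux initramfs-linux.img initramfs-linux-fallback.img; do
        cp "$src/$f" "$esp/$f"
    done
}

# Example: copy_kernel_to_esp /boot /boot/efi
```

Wired up to run after each kernel install (the wiki's mkinitcpio hook does essentially this), it removes the manual copying after every "linux" upgrade.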
Probably. Anything after the initrd declaration isn't being passed to the kernel, which is why you're getting the
Unable to find root device ''
messages.
You were right, that was causing the problem. Now I can enter my password on boot, but then I'm stuck in the shell again:
ERROR: device '/dev/mapper/cryptroot' not found
EDIT: Changing the mountpoint to "cryptroot" solved the above problem. Now I can boot using gummiboot (yaay). Now I'll wait for the next update of the linux package to see if it improves things.
Last edited by freijon (2014-05-01 19:49:14)
Today I had an upgrade of the "linux" package.
After upgrading the package I tried to reboot the system. It didn't work; I got stuck after decrypting the hard drive with a message that /boot/efi could not be mounted.
I booted with the CD once again, but everything looked fine. I then ran "pacman -S linux" and "mkinitcpio -p linux", and after that I could boot again.
So my guess is, after an upgrade of the "linux" package you have to run "mkinitcpio -p linux"... ?
No, that is done automatically by the linux package's postinstall script.
What I suspect has happened is your kernel and modules became desynchronised. i.e. your kernel (in /boot) was 3.14.2, but your modules (in /usr) were for 3.14.3. This can happen if you don't have /boot mounted when you update the linux package.
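A quick way to spot that kind of desync is to check whether a module tree exists for the kernel you are running. This is only a sketch; on a real system you would pass "$(uname -r)" and /usr/lib/modules:

```shell
#!/bin/sh
# Sketch: check whether a module directory exists for a given kernel
# version. If the kernel in /boot is older than the modules in
# /usr/lib/modules, the running kernel's version has no matching tree.
modules_present() {
    kver="$1"     # kernel release string, e.g. 3.14.3-1-ARCH
    moddir="$2"   # module base directory, normally /usr/lib/modules
    [ -d "$moddir/$kver" ]
}

# Example: modules_present "$(uname -r)" /usr/lib/modules \
#              || echo "kernel and modules are desynchronised"
```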
When you were in the live environment, did you mount all your partitions before you reinstalled the linux package, or just some of them?
When you were in the live environment, did you mount all your partitions before you reinstalled the linux package, or just some of them?
I mounted the root partition at /mnt and the EFI boot partition at /mnt/boot, then chrooted into /mnt and ran the commands.
Last edited by freijon (2014-05-11 20:47:21)
Is that how your system normally is, with the ESP at /boot? Or was your earlier post (#15), which said the error was that "/boot/efi" couldn't be mounted, a mistake?
It seems like my system mounted the EFI partition to /boot/efi, but I mounted it to /boot when using the live CD... As far as I know, I followed the instructions in the wiki.
Last edited by freijon (2014-05-11 21:00:43)
This is where you have gone wrong then. Gummiboot is loading the kernel from $ESP/, but when you update the linux package, you're putting the new kernel into /boot, on the root partition.
Modify your fstab to mount your ESP to /boot instead of /boot/efi, and you shouldn't get this problem the next time you update the linux package.
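Concretely, that means changing the ESP's line in /etc/fstab from something like the first entry below to the second. The device name comes from the earlier posts; the mount options here are illustrative only:

```
# before: ESP mounted inside /boot on the root filesystem
/dev/sda1   /boot/efi   vfat   rw,relatime   0   2

# after: ESP mounted directly at /boot, so kernel updates land on the ESP
/dev/sda1   /boot       vfat   rw,relatime   0   2
```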
That makes sense indeed. I changed my fstab to mount /boot instead of /boot/efi. I'm confident that the next upgrade will work fine. Thanks!
I'll set the topic to solved as soon as I'm sure that everything works after the next upgrade of "linux", if that's fine with everyone.
Hey everyone,
Once again, I couldn't boot my system after upgrading the linux package. Here is some additional information:
Firstly, here is my current fstab:
#
# /etc/fstab: static file system information
#
# <file system> <dir> <type> <options> <dump> <pass>
# UUID=5d310d04-700d-4854-b2c5-28fbe2af934f
/dev/mapper/cryptroot / ext4 rw,relatime,data=ordered 0 1
# UUID=3dbb0e3e-fa97-467c-80fd-996c6c553da3
/dev/sda3 none swap defaults 0 0
# UUID=7E1E-BDAE
/dev/sda1 /boot msdos rw,relatime,fmask=0022,dmask=0022,codepage=437,errors=remount-ro 0 2
After the boot failed, I ran:
$ systemctl status boot.mount
● boot.mount - /boot
Loaded: loaded (/etc/fstab)
Active: failed (Result: exit-code) since Die 2014-05-13 18:05:21 CEST; 1min 39s ago
Where: /boot
What: /dev/sda1
Docs: man:fstab(5)
man:systemd-fstab-generator(8)
Process: 413 ExecMount=/bin/mount /dev/sda1 /boot -t msdos -o rw,relatime,fmask=0022,dmask=0022,codepage=437,errors=remount-ro (code=exited, status=32)
Mai 13 18:05:21 arch mount[413]: mount: unknown filesystem type 'msdos'
Mai 13 18:05:21 arch systemd[1]: boot.mount mount process exited, code=exited status=32
Mai 13 18:05:21 arch systemd[1]: Failed to mount /boot.
Mai 13 18:05:21 arch systemd[1]: Unit boot.mount entered failed state.
It seems like there is something wrong with the msdos filesystem type, so I edited my fstab and deleted "msdos", because when I mount in the live environment I do not specify the filesystem.
After removing "msdos", I got the following error on boot:
$ systemctl status boot.mount
● boot.mount - /boot
Loaded: loaded (/etc/fstab)
Active: failed (Result: exit-code) since Die 2014-05-13 18:09:45 CEST; 41s ago
Where: /boot
What: /dev/sda1
Docs: man:fstab(5)
man:systemd-fstab-generator(8)
Process: 413 ExecMount=/bin/mount /dev/sda1 /boot -t rw,relatime,fmask=0022,dmask=0022,codepage=437,errors=remount-ro -o 0 (code=exited, status=32)
Mai 13 18:09:45 arch systemd[1]: boot.mount: Directory /boot to mount over is not empty, mounting anyway.
Mai 13 18:09:45 arch mount[413]: mount: wrong fs type, bad option, bad superblock on /dev/sda1,
Mai 13 18:09:45 arch mount[413]: missing codepage or helper program, or other error
Mai 13 18:09:45 arch mount[413]: In some cases useful info is found in syslog - try
Mai 13 18:09:45 arch mount[413]: dmesg | tail or so.
Mai 13 18:09:45 arch systemd[1]: boot.mount mount process exited, code=exited status=32
Mai 13 18:09:45 arch systemd[1]: Failed to mount /boot.
Mai 13 18:09:45 arch systemd[1]: Unit boot.mount entered failed state.
To make my system bootable again, I did the following:
1. Boot with live CD
2. Run the following commands:
cryptsetup open /dev/sda4 cryptroot
<password>
mount -t ext4 /dev/mapper/cryptroot /mnt
mount /dev/sda1 /mnt/boot
arch-chroot /mnt
pacman -S linux
mkinitcpio -p linux
exit
reboot
Any help would be appreciated, as I can't figure out what's wrong this time...
Last edited by freijon (2014-05-13 16:27:16)
Your filesystem type in fstab for your EFI partition should be vfat, not msdos.
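Using the device and options from the fstab you posted, the corrected line would be:

```
# UUID=7E1E-BDAE
/dev/sda1 /boot vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,errors=remount-ro 0 2
```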
I don't think this will solve the problem, because it worked after the aforementioned steps. If that were the problem, it should be a general problem, not one that only appears after a linux upgrade, or am I missing something? Anyway, I changed it to "vfat" now.
It did indeed solve the problem; today's update of the linux package went through without any issues. Thanks guys!