This morning I upgraded my system with the usual "pacman -Syu" and then rebooted. The system was unable to mount /boot, along with some other errors. Attached is a screenshot of the errors on screen.
I was able to boot off an Arch Linux install drive to chroot in and downgrade the system to Oct 28th using the Arch Linux Archive. I chose that date because I suspected something amiss with the kernel upgrade and maybe the kernel modules. Attached is a screenshot of the packages downgraded.
After downgrading the system to Oct 28th it boots fine.
I tried to run "systemctl status boot.mount": while chrooted it was disallowed, and while booted into the freshly downgraded system there were no logs.
Here's output of "dkms status":
evdi, 1.5.0_r2: added
evdi, 1.6.2, 5.3.7-arch1-2-ARCH, x86_64: installed
evdi, 1.6.2, 5.3.7.b-2-hardened, x86_64: installed
wireguard, 0.0.20191012, 5.3.7-arch1-2-ARCH, x86_64: installed
wireguard, 0.0.20191012, 5.3.7.b-2-hardened, x86_64: installed
I've used this system for a couple years now and have not had issues with upgrades (that I couldn't troubleshoot).
Last edited by blufinney (2019-11-05 02:31:25)
Offline
Don't post screenshots of text; paste the actual text: https://wiki.archlinux.org/index.php/Co … s_and_code
From a chroot, compare `pacman -Q linux` and `uname -a`: your /boot was likely not mounted for the upgrade.
Offline
uname from a chrooted environment will tell you the kernel from the system that hosts the chroot, not the one that would have been booted.
You can inspect the unmounted /boot path as well as the boot partition and run "file /boot/vmlinuz-linux" etc. to see the versions of the installed kernel(s).
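For instance, from the live ISO with the installed root mounted at /mnt (the sdXY device below is just a placeholder for the real boot partition), the comparison could look like:
# file /mnt/boot/vmlinuz-linux
# mount /dev/sdXY /mnt/boot
# file /mnt/boot/vmlinuz-linux
The first run shows whatever kernel (if any) sits on the root filesystem's /boot path; the second run, after mounting, shows the kernel on the actual boot partition.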
Offline
Can you unlock the root account (systemd complains it is locked) so that you can at least boot the system in rescue mode and diagnose the problem? (Chroot into it and set an empty password for root; you can undo it afterwards.) From there, inspect "systemctl status boot.mount" and /etc/fstab, or try to mount /boot manually to see what happens.
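From the chroot, that could be roughly the following (a sketch; remember to set or lock the password again once you are done):
# passwd -d root
and afterwards "passwd root" (or "passwd -l root") to restore it.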
Offline
Don't post screenshots of text; paste the actual text
Sorry Jason, I'll try to find an OCR tool to convert it next time.
Offline
You can inspect the unmounted /boot path as well as the boot partition and run "file /boot/vmlinuz-linux" etc. to see the versions of the installed kernel(s).
From my fully booted (but downgraded to Oct 28th) system I can see both are the same version. I can also replicate the issue by simply upgrading to the latest packages.
I can also confirm my boot directory is there (on my downgraded system) before upgrading again and effectively replicating the issue.
[gbf@archie470s ~]$ ls -la /boot
total 560148
drwxr-xr-x 4 root root 8192 Dec 31 1969 .
drwxr-xr-x 20 root root 4096 Oct 7 14:59 ..
-rwxr-xr-x 1 root root 777 Dec 22 2018 DB.cer
-rwxr-xr-x 1 root root 821 Dec 22 2018 DB.esl
drwxr-xr-x 4 root root 8192 Dec 7 2017 EFI
-rwxr-xr-x 1 root root 134799360 Nov 1 16:45 initramfs-linux-fallback.img
-rwxr-xr-x 1 root root 138260480 Nov 1 16:45 initramfs-linux-hardened-fallback.img
-rwxr-xr-x 1 root root 63784960 Nov 1 16:45 initramfs-linux-hardened.img
-rwxr-xr-x 1 root root 63180800 Nov 1 16:45 initramfs-linux.img
-rwxr-xr-x 1 root root 2577920 Sep 18 11:11 intel-ucode.img
-rwxr-xr-x 1 root root 779 Dec 22 2018 KEK.cer
-rwxr-xr-x 1 root root 823 Dec 22 2018 KEK.esl
-rwxr-xr-x 1 root root 135275 Dec 22 2018 KeyTool.efi
-rwxr-xr-x 1 root root 72960824 Nov 1 16:45 linux-hardened.img
-rwxr-xr-x 1 root root 72110904 Nov 1 16:45 linux.img
drwxr-xr-x 3 root root 8192 Dec 14 2017 loader
-rwxr-xr-x 1 root root 1222 Dec 22 2018 noPK.auth
-rwxr-xr-x 1 root root 0 Dec 22 2018 noPK.esl
-rwxr-xr-x 1 root root 2043 Dec 22 2018 PK.auth
-rwxr-xr-x 1 root root 777 Dec 22 2018 PK.cer
-rwxr-xr-x 1 root root 821 Dec 22 2018 PK.esl
-rwxr-xr-x 1 root root 6289792 Oct 25 04:28 vmlinuz-linux
-rwxr-xr-x 1 root root 6535552 Oct 25 08:19 vmlinuz-linux-hardened
Offline
Can you unlock the root account (systemd complains it is locked) so that you can at least boot the system in rescue mode and diagnose the problem?
No, I was not able to do that. It's cut off at the bottom of the screenshot but it said something like "press enter to continue" which just loops the same message. This is why I used a boot drive to recover to a chroot in order to run a downgrade to Oct 28th.
Offline
The first part of the screenshot seems to have some clues, but I'm having trouble finding anything in the logs that tells me more. Maybe it's not due to the kernels but because kmod and/or dkms was upgraded and is causing trouble?
When chrooted into the broken system I can see the boot directory full of updated kernel. Not sure why the system can't mount /boot anymore when attempting to boot the system.
Do any of these errors give some clues I'm missing? These occur after the upgrade upon boot. I'm at a loss.
Mounting Arbitrary Executable File Formats File System...
[FAILED] Failed to start Load Kernel Modules.
[FAILED] Failed to start CLI Netfilter Manager.
[FAILED] Failed to mount /boot.
[DEPEND] Dependency failed for Local File Systems.
Offline
When chrooted into the broken system I can see the boot directory full of updated kernel.
If this means "without mounting the boot partition" there's your problem.
Either you've been updating w/o the boot partition being mounted XOR the boot partition was mounted, but you're not actually booting from that partition but from the root partition.
Offline
When chrooted into the broken system I can see the boot directory full of updated kernel.
If this means "without mounting the boot partition" there's your problem.
Either you've been updating w/o the boot partition being mounted XOR the boot partition was mounted, but you're not actually booting from that partition but from the root partition.
While chroot'd into my broken system I modify
/etc/pacman.d/mirrorlist
Server = https://archive.archlinux.org/repos/2019/10/28/$repo/os/$arch
Then I run
# pacman -Syyuu
Then I reboot and the system loads successfully.
While successfully booted into my downgraded system I modify
/etc/pacman.d/mirrorlist
Server = https://mirrors.kernel.org/archlinux/$repo/os/$arch
Then I check that my /boot is mounted using
# ls /boot
Then I upgrade the system using
# pacman -Syyuu
Then I reboot and the system is broken - unable to mount /boot
I've repeated this process a couple times hoping to catch something in the logs. I can do it again if I needed for troubleshooting purposes.
Am I not checking that /boot is mounted correctly before I upgrade? I thought a successful "ls /boot" was a good indicator it was mounted?
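For what it's worth, a check that doesn't depend on what happens to be sitting in the directory would be something like this (just a sketch; both tools come with util-linux):
# findmnt /boot
# mountpoint /boot
findmnt prints the source device only if /boot is a mountpoint, and mountpoint says so explicitly.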
the boot partition was mounted, but you're not actually booting from that partition but from the root partition.
I'm not sure I understand? How could this just change willy-nilly on me? So when I downgrade it's all good, and when I upgrade (after 2 years with this same setup) the system decides willy-nilly to attempt boot from the root partition? Is this some kind of new archlinux AI I'm not aware of?
Seriously though, if "pacman -Syu" has the ability to change my system setup it's news to me.
Any other thoughts? I'm hitting a brick wall here.
Offline
From the live media or the downgraded system: is there a vmlinuz-linux in the /boot directory when the /boot filesystem is not mounted?
What bootloader does the system use?
Offline
From the live media or the downgraded system: is there a vmlinuz-linux in the /boot directory when the /boot filesystem is not mounted?
What bootloader does the system use?
Yes, both the chroot (using live media) and the downgraded system have vmlinuz-linux.
I'm using systemd-boot (i.e. bootctl).
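(If it helps, the entries it knows about can be listed with bootctl itself; a quick sketch:)
# bootctl status
# bootctl list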
Offline
Speaking of systemd-boot, below are the pacman hook and the script it runs after vmlinuz-* is changed. I reviewed this initially, but I'm not seeing any reason the new packages after Oct 28th wouldn't like this automated setup. I'm also a newb so I could be missing something. To politely state again: this has been working for 2 years, so if the issue is in here it's due to something new with a package after Oct 28th that doesn't like this.
/etc/pacman.d/hooks/secure-boot.hook
[Trigger]
Operation = Install
Operation = Upgrade
Type = File
Target = boot/vmlinuz-*
[Action]
When = PostTransaction
Exec = /bin/sh -c 'while read -r f; do /root/secure-boot/make-sign-image.sh "$f"; done'
NeedsTargets
/root/secure-boot/make-sign-image.sh
#!/bin/bash
FILE=$(echo $1 | sed 's/boot\///')
BOOTDIR=/boot
CERTDIR=/root/keys
KERNEL=$1
INITRAMFS="/boot/intel-ucode.img /boot/initramfs-$(echo $FILE | sed 's/vmlinuz-//').img"
EFISTUB=/usr/lib/systemd/boot/efi/linuxx64.efi.stub
BUILDDIR=_build
OUTIMG=/boot/$(echo $FILE | sed 's/vmlinuz-//').img
CMDLINE=/etc/cmdline
mkdir -p $BUILDDIR
cat ${INITRAMFS} > ${BUILDDIR}/initramfs.img
/usr/bin/objcopy \
--add-section .osrel=/etc/os-release --change-section-vma .osrel=0x20000 \
--add-section .cmdline=${CMDLINE} --change-section-vma .cmdline=0x30000 \
--add-section .linux=${KERNEL} --change-section-vma .linux=0x40000 \
--add-section .initrd=${BUILDDIR}/initramfs.img --change-section-vma .initrd=0x3000000 \
${EFISTUB} ${BUILDDIR}/combined-boot.efi
/usr/bin/sbsign --key ${CERTDIR}/DB.key --cert ${CERTDIR}/DB.crt --output ${BUILDDIR}/combined-boot-signed.efi ${BUILDDIR}/combined-boot.efi
cp ${BUILDDIR}/combined-boot-signed.efi ${OUTIMG}
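For context, with NeedsTargets the hook feeds the script the matched paths relative to the filesystem root (e.g. boot/vmlinuz-linux-hardened), so an equivalent manual run would look roughly like this (a sketch; the cd / just mirrors the relative path the hook passes):
# cd /
# /root/secure-boot/make-sign-image.sh boot/vmlinuz-linux-hardened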
Offline
To state just as politely for the nth time: the mere fact that you have a kernel in your root partition's /boot path (or actually anything in there) is über-fishy.
Along with the inability to load kernel modules after a kernel update, this usually means that you're not updating the kernel into the partition that is actually used for booting.
Boot the downgraded system, do NOT mount the /boot partition (unmount it if in doubt), update the system and reboot. Does that work?
Offline
To state just as politely for the nth time: the mere fact that you have a kernel in your root partition's /boot path (or actually anything in there) is über-fishy.
Along with the inability to load kernel modules after a kernel update, this usually means that you're not updating the kernel into the partition that is actually used for booting.
There isn't a kernel in my root partition's /boot path. Immediately after booting from live media I can do this:
# ls /boot
memtest86+ syslinux
I do this to chroot, before downgrading the broken system
# cryptsetup luksOpen /dev/nvme0n1p2 enc
# mount /dev/enc/root /mnt
# mount /dev/enc/home /mnt/home
# swapon /dev/enc/swap
# mount /dev/nvme0n1p1 /mnt/boot
# arch-chroot /mnt /bin/bash
Once I'm chroot'd (with proper mounting) there is a vmlinuz in the /boot path. This is how it should work when using systemd-boot (and other bootloaders), correct?
Boot the downgraded system, do NOT mount the /boot partition (unmount it if in doubt), update the system and reboot. Does that work?
After further elaborating my setup and what I'm doing, are you still recommending I do this? Seems like a bad idea?
Thanks for being patient with me.
Offline
Does the journal contain entries for a boot where /boot cannot be mounted? If so, can you please post that journal's contents?
Edit:
Also can you for diagnostic purposes unlock the root account so you can access the rescue shell?
Last edited by loqs (2019-11-04 17:30:33)
Offline
After further elaborating my setup and what I'm doing, are you still recommending I do this? Seems like a bad idea?
No, that's based on a misunderstanding of the actual system condition, and yes: it would be a bad idea.
Offline
Does the journal contain entries for a boot where /boot cannot be mounted? If so, can you please post that journal's contents?
Here I used journalctl --list-boots and then journalctl -b -10 to get logging from the failed boot (just learned how to do this, cool!).
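(For reference, the exact commands; the -10 offset is just where that failed boot happened to land in my boot list:)
# journalctl --list-boots
# journalctl -b -10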
Nov 01 08:50:57 archie470s systemd[1]: Mounting /boot...
Nov 01 08:50:57 archie470s mount[481]: mount: /boot: unknown filesystem type 'vfat'.
Nov 01 08:50:57 archie470s systemd[1]: boot.mount: Mount process exited, code=exited, status=32/n/a
Nov 01 08:50:57 archie470s systemd[1]: boot.mount: Failed with result 'exit-code'.
Nov 01 08:50:57 archie470s systemd[1]: Failed to mount /boot.
Nov 01 08:50:57 archie470s systemd[1]: Dependency failed for Local File Systems.
Nov 01 08:50:57 archie470s systemd[1]: local-fs.target: Job local-fs.target/start failed with result 'dependency'.
Nov 01 08:50:57 archie470s systemd[1]: local-fs.target: Triggering OnFailure= dependencies.
I found this interesting. Is it as interesting as it seems?
Nov 01 08:50:57 archie470s mount[481]: mount: /boot: unknown filesystem type 'vfat'.
Did a recently upgraded package remove vfat support or something? I didn't see a change to dosfstools.
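(For what it's worth, the vfat driver is a kernel module rather than something dosfstools provides, so the message points at module loading. A quick check in the environment where the mount fails, e.g. the rescue shell once it's reachable, would be a sketch like:)
# uname -r
# modinfo vfat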
When I originally prepared this system a couple years ago I did this to prep the boot partition:
# sgdisk --zap-all /dev/nvme0n1
# cgdisk /dev/nvme0n1
# mkfs.vfat -F32 /dev/nvme0n1p1
Pretty standard stuff, I think?
Edit:
Also can you for diagnostic purposes unlock the root account so you can access the rescue shell?
I either cannot, or I simply don't know how to? It says something like "press enter to continue". Upon pressing enter it says the same thing again.
Offline
This is a typical sign of the on-disk modules not matching the running kernel.
Can you boot the fallback initramfs?
# passwd -u root
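One quick way to confirm such a mismatch from a chroot into the affected system (a sketch; compare the kernel version that would be booted against what is installed under /usr/lib/modules):
# ls /usr/lib/modules
# pacman -Q linux linux-hardened
# file /boot/vmlinuz-linux /boot/vmlinuz-linux-hardened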
Offline
The rest of that journal would also be helpful as it should indicate if /boot was being mounted by the initrd or after switch root by systemd.
Edit:
Also at the start of that journal there might be a message similar to:
Warning /lib/modules/5.3.7-arch1-ARCH/modules.device not found - ignoring
Last edited by loqs (2019-11-04 19:52:45)
Offline
The rest of that journal would also be helpful as it should indicate if /boot was being mounted by the initrd or after switch root by systemd.
Edit:
Also at the start of that journal there might be a message similar to:
Warning /lib/modules/5.3.7-arch1-ARCH/modules.device not found - ignoring
Here is the full output:
https://pastebin.com/raw/EPaigJ7G
Offline
Please post the contents of pacman.log from when the issue started. Do you use any custom pacman hooks such as to automate signing the kernel?
Edit:
From the journal:
Nov 04 13:01:53 archie470s kernel: Linux version 5.3.7.b-2-hardened (linux-hardened@archlinux) (gcc version 9.2.0 (GCC)) #1 SMP PREEMPT @1572016775
....
Nov 04 13:01:53 archie470s ufw-init[367]: modprobe: FATAL: Module nf_conntrack_ftp not found in directory /lib/modules/5.3.7.b-2-hardened
Last edited by loqs (2019-11-05 00:10:29)
Offline
Please post the contents of pacman.log from when the issue started. Do you use any custom pacman hooks such as to automate signing the kernel?
Here's the pacman.log of the upgrade that caused the failed boot. I do use a custom pacman hook to sign the kernel; see post #13, where I posted the hook and the script it references.
[2019-11-01T08:41:34-0700] [PACMAN] Running 'pacman -Syu'
[2019-11-01T08:41:34-0700] [PACMAN] synchronizing package lists
[2019-11-01T08:41:35-0700] [PACMAN] starting full system upgrade
[2019-11-01T08:42:11-0700] [ALPM] running '70-dkms-remove.hook'...
[2019-11-01T08:42:11-0700] [ALPM-SCRIPTLET] ==> dkms remove wireguard/0.0.20191012 -k 5.3.7-arch1-2-ARCH
[2019-11-01T08:42:17-0700] [ALPM-SCRIPTLET] ==> dkms remove evdi/1.6.2 -k 5.3.7.b-2-hardened
[2019-11-01T08:42:24-0700] [ALPM-SCRIPTLET] ==> dkms remove wireguard/0.0.20191012 -k 5.3.7.b-2-hardened
[2019-11-01T08:42:29-0700] [ALPM-SCRIPTLET] ==> dkms remove evdi/1.6.2 -k 5.3.7-arch1-2-ARCH
[2019-11-01T08:42:34-0700] [ALPM] transaction started
[2019-11-01T08:42:34-0700] [ALPM] upgraded x265 (3.2-1 -> 3.2.1-1)
[2019-11-01T08:42:34-0700] [ALPM] upgraded kmod (26-2 -> 26-3)
[2019-11-01T08:42:38-0700] [ALPM] upgraded chromium (78.0.3904.70-1 -> 78.0.3904.87-1)
[2019-11-01T08:42:38-0700] [ALPM] upgraded cython2 (0.29.13-1 -> 0.29.14-1)
[2019-11-01T08:42:38-0700] [ALPM] upgraded dkms (2.7.1-1 -> 2.8.1-1)
[2019-11-01T08:42:41-0700] [ALPM] upgraded firefox (70.0-1 -> 70.0.1-1)
[2019-11-01T08:42:41-0700] [ALPM] upgraded geoclue (2.5.5-2 -> 2.5.5+6+gea52170-1)
[2019-11-01T08:42:42-0700] [ALPM] upgraded imagemagick (7.0.8.68-2 -> 7.0.9.2-1)
[2019-11-01T08:42:42-0700] [ALPM] upgraded libmagick6 (6.9.10.70-1 -> 6.9.10.71-1)
[2019-11-01T08:42:42-0700] [ALPM] upgraded mkinitcpio (26-1 -> 27-1)
[2019-11-01T08:42:45-0700] [ALPM] upgraded linux (5.3.7.arch1-2 -> 5.3.8.1-1)
[2019-11-01T08:42:49-0700] [ALPM] upgraded linux-hardened (5.3.7.b-2 -> 5.3.7.b-3)
[2019-11-01T08:42:53-0700] [ALPM] upgraded linux-hardened-headers (5.3.7.b-2 -> 5.3.7.b-3)
[2019-11-01T08:42:56-0700] [ALPM] upgraded linux-headers (5.3.7.arch1-2 -> 5.3.8.1-1)
[2019-11-01T08:42:56-0700] [ALPM] upgraded zbar (0.23-1 -> 0.23-2)
[2019-11-01T08:42:56-0700] [ALPM] transaction completed
[2019-11-01T08:42:57-0700] [ALPM] running '20-systemd-sysusers.hook'...
[2019-11-01T08:42:57-0700] [ALPM] running '30-systemd-daemon-reload.hook'...
[2019-11-01T08:42:57-0700] [ALPM] running '30-systemd-tmpfiles.hook'...
[2019-11-01T08:42:57-0700] [ALPM] running '30-systemd-update.hook'...
[2019-11-01T08:42:57-0700] [ALPM] running '60-depmod.hook'...
[2019-11-01T08:43:07-0700] [ALPM] running '70-dkms-install.hook'...
[2019-11-01T08:43:07-0700] [ALPM-SCRIPTLET] ==> dkms install evdi/1.6.2 -k 5.3.8-arch1-1
[2019-11-01T08:43:17-0700] [ALPM-SCRIPTLET] ==> dkms install wireguard/0.0.20191012 -k 5.3.8-arch1-1
[2019-11-01T08:43:31-0700] [ALPM-SCRIPTLET] ==> dkms install evdi/1.6.2 -k 5.3.7.b-3-hardened
[2019-11-01T08:43:40-0700] [ALPM-SCRIPTLET] ==> dkms install wireguard/0.0.20191012 -k 5.3.7.b-3-hardened
[2019-11-01T08:43:54-0700] [ALPM] running '90-mkinitcpio-install.hook'...
[2019-11-01T08:43:54-0700] [ALPM-SCRIPTLET] ==> Building image from preset: /etc/mkinitcpio.d/linux-hardened.preset: 'default'
[2019-11-01T08:43:54-0700] [ALPM-SCRIPTLET] -> -k /boot/vmlinuz-linux-hardened -c /etc/mkinitcpio.conf -g /boot/initramfs-linux-hardened.img
[2019-11-01T08:43:54-0700] [ALPM-SCRIPTLET] ==> Starting build: 5.3.7.b-3-hardened
[2019-11-01T08:43:54-0700] [ALPM-SCRIPTLET] -> Running build hook: [base]
[2019-11-01T08:43:54-0700] [ALPM-SCRIPTLET] -> Running build hook: [udev]
[2019-11-01T08:43:54-0700] [ALPM-SCRIPTLET] -> Running build hook: [autodetect]
[2019-11-01T08:43:54-0700] [ALPM-SCRIPTLET] -> Running build hook: [modconf]
[2019-11-01T08:43:54-0700] [ALPM-SCRIPTLET] -> Running build hook: [block]
[2019-11-01T08:43:55-0700] [ALPM-SCRIPTLET] -> Running build hook: [keymap]
[2019-11-01T08:43:55-0700] [ALPM-SCRIPTLET] -> Running build hook: [encrypt]
[2019-11-01T08:43:56-0700] [ALPM-SCRIPTLET] -> Running build hook: [lvm2]
[2019-11-01T08:43:57-0700] [ALPM-SCRIPTLET] -> Running build hook: [resume]
[2019-11-01T08:43:57-0700] [ALPM-SCRIPTLET] -> Running build hook: [filesystems]
[2019-11-01T08:43:57-0700] [ALPM-SCRIPTLET] -> Running build hook: [keyboard]
[2019-11-01T08:43:57-0700] [ALPM-SCRIPTLET] -> Running build hook: [fsck]
[2019-11-01T08:43:57-0700] [ALPM-SCRIPTLET] ==> Generating module dependencies
[2019-11-01T08:43:57-0700] [ALPM-SCRIPTLET] ==> Creating uncompressed initcpio image: /boot/initramfs-linux-hardened.img
[2019-11-01T08:43:57-0700] [ALPM-SCRIPTLET] ==> Image generation successful
[2019-11-01T08:43:57-0700] [ALPM-SCRIPTLET] ==> Building image from preset: /etc/mkinitcpio.d/linux-hardened.preset: 'fallback'
[2019-11-01T08:43:57-0700] [ALPM-SCRIPTLET] -> -k /boot/vmlinuz-linux-hardened -c /etc/mkinitcpio.conf -g /boot/initramfs-linux-hardened-fallback.img -S autodetect
[2019-11-01T08:43:57-0700] [ALPM-SCRIPTLET] ==> Starting build: 5.3.7.b-3-hardened
[2019-11-01T08:43:57-0700] [ALPM-SCRIPTLET] -> Running build hook: [base]
[2019-11-01T08:43:58-0700] [ALPM-SCRIPTLET] -> Running build hook: [udev]
[2019-11-01T08:43:58-0700] [ALPM-SCRIPTLET] -> Running build hook: [modconf]
[2019-11-01T08:43:58-0700] [ALPM-SCRIPTLET] -> Running build hook: [block]
[2019-11-01T08:43:59-0700] [ALPM-SCRIPTLET] ==> WARNING: Possibly missing firmware for module: wd719x
[2019-11-01T08:43:59-0700] [ALPM-SCRIPTLET] ==> WARNING: Possibly missing firmware for module: aic94xx
[2019-11-01T08:44:01-0700] [ALPM-SCRIPTLET] -> Running build hook: [keymap]
[2019-11-01T08:44:01-0700] [ALPM-SCRIPTLET] -> Running build hook: [encrypt]
[2019-11-01T08:44:02-0700] [ALPM-SCRIPTLET] -> Running build hook: [lvm2]
[2019-11-01T08:44:02-0700] [ALPM-SCRIPTLET] -> Running build hook: [resume]
[2019-11-01T08:44:02-0700] [ALPM-SCRIPTLET] -> Running build hook: [filesystems]
[2019-11-01T08:44:03-0700] [ALPM-SCRIPTLET] -> Running build hook: [keyboard]
[2019-11-01T08:44:04-0700] [ALPM-SCRIPTLET] -> Running build hook: [fsck]
[2019-11-01T08:44:05-0700] [ALPM-SCRIPTLET] ==> Generating module dependencies
[2019-11-01T08:44:05-0700] [ALPM-SCRIPTLET] ==> Creating uncompressed initcpio image: /boot/initramfs-linux-hardened-fallback.img
[2019-11-01T08:44:06-0700] [ALPM-SCRIPTLET] ==> Image generation successful
[2019-11-01T08:44:06-0700] [ALPM-SCRIPTLET] ==> Building image from preset: /etc/mkinitcpio.d/linux.preset: 'default'
[2019-11-01T08:44:06-0700] [ALPM-SCRIPTLET] -> -k /boot/vmlinuz-linux -c /etc/mkinitcpio.conf -g /boot/initramfs-linux.img
[2019-11-01T08:44:06-0700] [ALPM-SCRIPTLET] ==> Starting build: 5.3.8-arch1-1
[2019-11-01T08:44:06-0700] [ALPM-SCRIPTLET] -> Running build hook: [base]
[2019-11-01T08:44:06-0700] [ALPM-SCRIPTLET] -> Running build hook: [udev]
[2019-11-01T08:44:06-0700] [ALPM-SCRIPTLET] -> Running build hook: [autodetect]
[2019-11-01T08:44:06-0700] [ALPM-SCRIPTLET] -> Running build hook: [modconf]
[2019-11-01T08:44:06-0700] [ALPM-SCRIPTLET] -> Running build hook: [block]
[2019-11-01T08:44:07-0700] [ALPM-SCRIPTLET] -> Running build hook: [keymap]
[2019-11-01T08:44:07-0700] [ALPM-SCRIPTLET] -> Running build hook: [encrypt]
[2019-11-01T08:44:08-0700] [ALPM-SCRIPTLET] -> Running build hook: [lvm2]
[2019-11-01T08:44:08-0700] [ALPM-SCRIPTLET] -> Running build hook: [resume]
[2019-11-01T08:44:08-0700] [ALPM-SCRIPTLET] -> Running build hook: [filesystems]
[2019-11-01T08:44:08-0700] [ALPM-SCRIPTLET] -> Running build hook: [keyboard]
[2019-11-01T08:44:09-0700] [ALPM-SCRIPTLET] -> Running build hook: [fsck]
[2019-11-01T08:44:09-0700] [ALPM-SCRIPTLET] ==> Generating module dependencies
[2019-11-01T08:44:09-0700] [ALPM-SCRIPTLET] ==> Creating uncompressed initcpio image: /boot/initramfs-linux.img
[2019-11-01T08:44:09-0700] [ALPM-SCRIPTLET] ==> Image generation successful
[2019-11-01T08:44:09-0700] [ALPM-SCRIPTLET] ==> Building image from preset: /etc/mkinitcpio.d/linux.preset: 'fallback'
[2019-11-01T08:44:09-0700] [ALPM-SCRIPTLET] -> -k /boot/vmlinuz-linux -c /etc/mkinitcpio.conf -g /boot/initramfs-linux-fallback.img -S autodetect
[2019-11-01T08:44:09-0700] [ALPM-SCRIPTLET] ==> Starting build: 5.3.8-arch1-1
[2019-11-01T08:44:09-0700] [ALPM-SCRIPTLET] -> Running build hook: [base]
[2019-11-01T08:44:09-0700] [ALPM-SCRIPTLET] -> Running build hook: [udev]
[2019-11-01T08:44:09-0700] [ALPM-SCRIPTLET] -> Running build hook: [modconf]
[2019-11-01T08:44:09-0700] [ALPM-SCRIPTLET] -> Running build hook: [block]
[2019-11-01T08:44:10-0700] [ALPM-SCRIPTLET] ==> WARNING: Possibly missing firmware for module: wd719x
[2019-11-01T08:44:11-0700] [ALPM-SCRIPTLET] ==> WARNING: Possibly missing firmware for module: aic94xx
[2019-11-01T08:44:12-0700] [ALPM-SCRIPTLET] -> Running build hook: [keymap]
[2019-11-01T08:44:12-0700] [ALPM-SCRIPTLET] -> Running build hook: [encrypt]
[2019-11-01T08:44:13-0700] [ALPM-SCRIPTLET] -> Running build hook: [lvm2]
[2019-11-01T08:44:13-0700] [ALPM-SCRIPTLET] -> Running build hook: [resume]
[2019-11-01T08:44:13-0700] [ALPM-SCRIPTLET] -> Running build hook: [filesystems]
[2019-11-01T08:44:14-0700] [ALPM-SCRIPTLET] -> Running build hook: [keyboard]
[2019-11-01T08:44:15-0700] [ALPM-SCRIPTLET] -> Running build hook: [fsck]
[2019-11-01T08:44:17-0700] [ALPM-SCRIPTLET] ==> Generating module dependencies
[2019-11-01T08:44:17-0700] [ALPM-SCRIPTLET] ==> Creating uncompressed initcpio image: /boot/initramfs-linux-fallback.img
[2019-11-01T08:44:17-0700] [ALPM-SCRIPTLET] ==> Image generation successful
[2019-11-01T08:44:17-0700] [ALPM] running 'dbus-reload.hook'...
[2019-11-01T08:44:17-0700] [ALPM] running 'detect-old-perl-modules.hook'...
[2019-11-01T08:44:17-0700] [ALPM] running 'gtk-update-icon-cache.hook'...
[2019-11-01T08:44:17-0700] [ALPM] running 'paccache-clean.hook'...
[2019-11-01T08:44:17-0700] [ALPM-SCRIPTLET]
[2019-11-01T08:44:17-0700] [ALPM-SCRIPTLET] ==> finished: 15 packages removed (disk space saved: 293.6 MiB)
[2019-11-01T08:44:17-0700] [ALPM] running 'update-desktop-database.hook'...
I also noticed the ufw-init fatal errors in the boot log, but I figured they were due to whatever the root issue was and would go away once that was fixed. Maybe wishful thinking?
Offline
Apologies, I missed post #13.
Target = boot/vmlinuz-*
Will not work, as new kernels no longer supply anything in /boot or /etc; possibly use
usr/lib/modules/*/vmlinuz
see https://git.archlinux.org/mkinitcpio.git/ for the changes made to accommodate the new kernel packaging.
If your hook did not fire, what kernel and initrd would be used?
Edit:
You should probably also order your hook after /usr/share/libalpm/hooks/90-mkinitcpio-install.hook
Last edited by loqs (2019-11-05 00:55:41)
Offline
Target = boot/vmlinuz-*
Will not work, as new kernels no longer supply anything in /boot or /etc; possibly use
usr/lib/modules/*/vmlinuz
Odd, /boot/vmlinuz-linux and /boot/vmlinuz-linux-hardened receive a new timestamp after upgrading to the latest packages, but it certainly doesn't trigger the custom signing hook.
see https://git.archlinux.org/mkinitcpio.git/ for the changes made to accommodate the new kernel packaging.
Awesome! Thanks for pointing that out. I did see that mkinitcpio was upgraded from 26 to 27, but had no idea about the new packaging. Can we just ask him to revert it so my system doesn't break? lol, jk.
If your hook did not fire, what kernel and initrd would be used?
It should use the ones in /boot based on my systemd-boot config. I believe this means that after upgrading it boots the old signed image (old kernel plus old initramfs) while the module tree on disk has already been replaced.
# cat /boot/loader/entries/arch_hardened.conf
title Archie470s Hardened
efi /linux-hardened.img
Since the custom hook never runs, a new/signed /boot/linux-hardened.img never gets re-created during the upgrade.
Edit:
You should probably also order your hook after /usr/share/libalpm/hooks/90-mkinitcpio-install.hook
Ok. Maybe I can also use the [Trigger] from 90-mkinitcpio-install.hook for my custom signing hook (rough idea sketched below). It appears these changes completely break my signing script - but that's a separate issue.
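Something along these lines might be a starting point (purely a sketch: 95-secure-boot.hook is a hypothetical name chosen so it sorts after 90-mkinitcpio-install.hook, Type = Path is the current spelling of the old Type = File, and make-sign-image.sh would need adapting because the targets passed are now paths like usr/lib/modules/5.3.8-arch1-1/vmlinuz instead of boot/vmlinuz-*):
/etc/pacman.d/hooks/95-secure-boot.hook
[Trigger]
Operation = Install
Operation = Upgrade
Type = Path
Target = usr/lib/modules/*/vmlinuz
[Action]
When = PostTransaction
Exec = /bin/sh -c 'while read -r f; do /root/secure-boot/make-sign-image.sh "$f"; done'
NeedsTargets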
Thanks for your help!
Offline