
#1 2021-09-12 19:47:23

atomic513
Member
Registered: 2020-12-02
Posts: 3

[SOLVED] Failed to mount /boot, Dependency failed for Local File Systems

When I try to boot Arch, I get:

Starting version 249.4-1-arch
/dev/sdb3: clean, 145507/15466496 files, 3284030/61859153 blocks
[FAILED] Failed to mount /boot.
[DEPEND] Dependency failed for Local File Systems.
You are in emergency mode. After logging in, type "journalctl -xb" to view system logs, "systemctl reboot" to reboot, "systemctl default" or "exit" to boot into default mode.
Give root password for maintenance
(or press Control-D to continue): _

From what I could gather reading other forum posts, this was caused by me running sudo pacman -Syu. I don't know why that broke things; a lot of the threads are too technical for me to understand. My first idea was to run pacman -Syyuu, but it gave a bunch of errors, either "warning: too many errors from [xxxxx], skipping for the remainder of this transaction" or "error: failed retrieving file 'core.db' from [xxxxx] : Could not resolve host: [xxxxx]". In the end it returns "error: failed to synchronize all databases (invalid url for server)".
I can't give the full output because when I try to pipe it into sprunge, it returns "curl: (6) Could not resolve host: sprunge.us".
When I check lsblk, my root and swap partitions are mounted, but my boot one isn't.
I'm not really sure what to do now. I can provide more info if needed.

Last edited by atomic513 (2021-09-15 20:51:56)


#2 2021-09-12 20:19:16

seth
Member
Registered: 2012-09-03
Posts: 56,279

Re: [SOLVED] Failed to mount /boot, Dependency failed for Local File Systems

The common cause is that you forgot to mount /boot before updating, or rather that you're actually booting from the root partition. In any event, the booting kernel doesn't match the installed one, so you can't load the vfat module, and systemd stops the boot for no actually good reason…

"uname -a"?

You can comment out the boot partition in your fstab and the system should™ boot afterwards (or at least fail at a different point).
You'll probably still not have a functioning network, but you can re-install the kernel out of the package cache ("pacman -S linux" or "pacman -S linux-lts").
Alternatively, chroot in from the installation iso.
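A sketch of what the disabled entry might look like in /etc/fstab (the UUID is made up; yours will differ):

# UUID=ABCD-1234   /boot   vfat   rw,relatime   0 2

If "pacman -S linux" still insists on the network, installing the cached package file directly with "pacman -U /var/cache/pacman/pkg/linux-<version>.pkg.tar.zst" should also work, assuming the cache hasn't been cleaned.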

Signs that this is what happened are that
1. the output of "uname -a" shows an older kernel
2. there are vmlinuz-linux and initramfs-*.img files in the *unmounted* /boot path and "file /boot/vmlinuz-linux" matches the running kernel
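For example (the version strings here are illustrative; the point is that both commands report the same, stale version while the repos have already moved on):

$ uname -r
5.13.13-arch1-1
$ file /boot/vmlinuz-linux
/boot/vmlinuz-linux: Linux kernel x86 boot executable bzImage, version 5.13.13-arch1-1 ...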

"curl: (6) Could not resolve host: sprunge.us"
"error: failed retrieving file 'core.db' from [xxxxx] : Could not resolve host: [xxxxx]"

Because you've no network connection.

My first idea was to run pacman -Syyuu

Wherever you got that from: unlike, unsubscribe, and tell all your friends to do the same.
The command is there to enforce downgrades when dealing with broken mirrors. It doesn't mean "update more and better" or whatever the people promoting it seem to think it does.
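For reference, what the doubled flags actually do, per the pacman man page:

-Syy  forces a refresh of all package databases, even if they appear to be up to date
-Suu  enables downgrades, replacing local packages that are newer than the repo versions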


#3 2021-09-12 20:53:44

atomic513
Member
Registered: 2020-12-02
Posts: 3

Re: [SOLVED] Failed to mount /boot, Dependency failed for Local File Systems

uname -a gives:

Linux Thecoollaptop 5.13.13-arch1-1 #1 SMP PREEMPT Thu, 26 Aug 2021 19:14:36 +0000 x86_64 GNU/Linux

In /boot there are 2 directories, EFI and grub, and 3 files: initramfs-linux~.img, initramfs-linux.img, and vmlinuz-linux.
I ran file /boot/vmlinuz-linux and the version was the same as the running kernel.

You can comment the boot partition in your fstab and the systemd should™ boot afterwards (or at least fail at a different point)

I commented out my boot partition and you were right! It was able to boot, but it failed (or was that what it was supposed to do?) at a different point. I don't know how to describe it well, but it boots to that screen where pressing Ctrl+Alt+<Function key> switches between different TTYs.
Is my understanding correct that because I commented out the boot partition in fstab, lightdm and awesomewm (the ones I normally use) don't start?

This is looking promising so I want to make sure I don't mess something up before knowing exactly what I need to do.

You'll probably still not have a functioning network, but can re-install the kernel out of the package cache ("pacman -S linux" or "pacman -S linux-lts")

When you say re-installing the kernel, do you mean all I have to do is "pacman -S linux"?
Then after I do that, do I un-comment the boot partition?


#4 2021-09-13 06:31:33

seth
Member
Registered: 2012-09-03
Posts: 56,279

Re: [SOLVED] Failed to mount /boot, Dependency failed for Local File Systems

Linux Thecoollaptop 5.13.13-arch1-1 #1 SMP PREEMPT Thu, 26 Aug 2021 19:14:36 +0000 x86_64 GNU/Linux

Well, we're at 5.14.2.arch1-2, so this is an old one. Congrats, we found the problem ;-)

Is my understanding correct that because I commented out the boot partition in fstab, lightdm and awesomewm (the ones I normally use) don't start?

No, the boot partition isn't relevant at runtime, but you're likely missing more modules.

When you say re-installing the kernel, do you mean all I have to do is "pacman -S linux"?

Yes.
But you have to boot at least the rescue.target (2nd link below) or chroot in from e.g. the installation iso.
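In practice that means something like the following (a sketch; the exact GRUB menu key may differ on your setup):

# at the GRUB menu, press 'e' on the boot entry and append to the line starting with "linux":
systemd.unit=rescue.target
# or, from a system that already reached a console:
systemctl isolate rescue.target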

Then after I do that do I un-comment the boot partition?

NO!

Your problem is that you're not booting from/using that partition, i.e. mounting it only puts it in the way, causing the problems you're facing right now.
If you choose to use the boot partition, it's very important to also run grub-mkconfig, and possibly grub-install, with the /boot partition mounted, so you end up actually booting from there.
(Only) then you must have it mounted at least whenever you update the kernel, to not run into the current problem again.
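On a UEFI system with the ESP mounted at /boot, that sequence would look roughly like this (a sketch; the --efi-directory and bootloader id are assumptions based on your layout, so check them against the wiki):

# mount /boot
# grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=GRUB
# grub-mkconfig -o /boot/grub/grub.cfg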


#5 2021-09-15 20:50:02

atomic513
Member
Registered: 2020-12-02
Posts: 3

Re: [SOLVED] Failed to mount /boot, Dependency failed for Local File Systems

Sorry for replying almost a week later; I got caught up in IRL stuff. Thank you for your help! I would never have been able to solve this on my own at my current skill level. For anyone else who finds this thread in the future, here is what I did, step by step (condensed into plain commands below the list):

1.) comment out the boot partition in /etc/fstab
2.) reboot
3.) login as root
4.) run systemctl isolate rescue.target (from the arch wiki: "This will only change the current target, and has no effect on the next boot.")
5.) run pacman -S linux
6.) type "exit" to exit the rescue mode
7.) log in as a normal user
8.) reboot
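Condensed into plain commands (a sketch; it assumes you can log in as root at the emergency prompt, and nano stands in for whatever editor you use):

# nano /etc/fstab                  # put a '#' in front of the /boot line
# reboot
# systemctl isolate rescue.target  # after logging back in as root
# pacman -S linux                  # installs the kernel from the package cache
# exit
# reboot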

I have a few questions left, though. First and most importantly, why did this happen in the first place? I swear I've updated the kernel before and I never had this problem. Was it just bad luck? What can I do to prevent this in the future?

Second:

If you chose to use the boot partition, it's very important to also grub-mkconfig, possibly grub-install with the /boot partition mounted - so you end up actually booting from there. (Only) Then you must have it mounted at least whenever you update the kernel to not run into the current problem again.

My system seems to work normally now, but I don't know if I should risk running grub-mkconfig and grub-install, just in case I mess something up. Is there any disadvantage to not remounting the boot partition? If I don't, will the next kernel update be messed up?

Thanks again for your help. I'll mark this thread as solved.

EDIT:
I was able to add my boot partition again! I mounted my old boot partition on /mnt, ran sudo grub-mkconfig -o /mnt/grub/grub.cfg, uncommented the boot partition in fstab, and then rebooted. (It might be a good idea to back up your /boot first, just in case.)
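In command form, that was roughly (sdXn is a placeholder for the actual boot partition; grub-install may additionally be needed if the bootloader itself still lives on the root partition):

$ sudo cp -a /boot /boot.bak          # optional: back up the current /boot files
$ sudo mount /dev/sdXn /mnt
$ sudo grub-mkconfig -o /mnt/grub/grub.cfg
$ # uncomment the /boot line in /etc/fstab, then:
$ sudo reboot

Afterwards, "findmnt /boot" and "uname -r" can confirm that the partition is mounted and the running kernel is current.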

Last edited by atomic513 (2021-09-19 00:01:05)

