Full disclosure: I do not currently have an issue; it's just something that has happened every time I have attempted a dual boot where the two systems do not share the same disk, and I have never worked out why it happens. Curious if anyone has run into this and maybe knows the cause.
I just built a new PC over the weekend. I got Arch going and that's all happy, but I'd like to pop in another NVMe drive and install Windows for work-related tasks.
Setup:
-Install Arch Linux on NVMe drive /dev/nvme0n1; bootloader: GRUB (os-prober enabled, using UUIDs for the crypt device and root partition)
-Install Windows on NVMe drive /dev/nvme1n1
Problem:
At first everything runs fine. Arch boots, asks for the crypt password and correctly unlocks, finds root and loads SDDM; login is fine. I can also shut down/reboot and select Windows from GRUB, and it boots and works fine. Then, randomly after a reboot or cold boot, the device names change and GRUB fails to unlock LUKS and can't find the drive. After investigating, lsblk and fdisk both say Arch is now /dev/nvme1n1 and Windows is now /dev/nvme0n1. Windows keeps working no matter what, but I then need to generate new uuids, update fstab, and fix grub. What I find interesting about the issue is that it does not matter which slots the drives occupy on the board. I think it is a device-detection thing at POST or when GRUB loads. Same outcome on two different laptops and a desktop PC.
CSM is off
Fast boot is enabled
Secure boot is off
All hardware is from within the last couple of years
Running kernel 6.13.5-237-tkg-pds
Last edited by live4thamuzik (Yesterday 22:59:07)
This is "normal". Device names in Linux are not fixed, by design. sda, sdb, sdc can change their order; nvme0, nvme1, nvme2 likewise.
Whichever drive gets detected first gets assigned a name first. This is effectively random. Even if it were deterministic, unplugging a single drive at the start or in the middle would still shift the names of all drives that follow.
This is why you use UUIDs for everything, or one of the /dev/disk/by-* symlinks.
For fixed drive names, you'd have to pass a list of serial numbers or PCI paths to the kernel or something similar, so specific names could be reserved for specific drives. However, no such system is in place, so it can't be done.
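As an illustration (a sketch, assuming a typical udev setup), the stable identifiers live under /dev/disk:

```shell
# udev maintains symlinks that identify drives by properties that do
# not depend on detection order; each symlink points at whatever
# /dev/nvmeXnY (or /dev/sdX) name the drive happened to get this boot.
ls -l /dev/disk/by-uuid/   # filesystem UUID
ls -l /dev/disk/by-id/     # model and serial number
ls -l /dev/disk/by-path/   # physical (PCI) path
```

Referencing any of these in fstab or on the kernel command line is immune to nvme0/nvme1 swapping.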
Thanks, I had a sneaking suspicion device detection was at play, since it happened while using UUIDs. I guess if I want Windows on its own drive I will need to use a SATA SSD and keep the NVMe for Linux, so Linux will always be "nvme0" and Windows will always appear as "sdX".
Yes, well, if that's somehow important to you. Even with just a single drive, you have to use UUIDs: partitions can change their numbers too (if anything edits the partition table for any reason).
If you have a USB stick, that could also end up as sda, and the internal drive as sdb...
Basically, if you use device names on the command line (for dd, fdisk, mount, ...), you have to develop the habit of checking first which device is which.
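For example, before running anything destructive you can check the current mapping (the device path below is a placeholder):

```shell
# Show every block device with filesystem type, label and UUID, so you
# can see which kernel name (nvme0n1, sda, ...) each disk got this boot.
lsblk -f
# Or query a single device directly (placeholder path):
# blkid /dev/nvme0n1p3
```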
You can also use LVM in Linux; that way your Linux filesystems get /dev/vg/lv names that do not usually change. LVM uses UUIDs and metadata internally.
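For instance, with hypothetical volume group and LV names (vg0, lv_root, lv_home), fstab entries could look like this and stay valid no matter which NVMe slot the disk enumerates as:

```
# /etc/fstab fragment using stable device-mapper paths
# (vg0/lv_root/lv_home are hypothetical names)
/dev/vg0/lv_root  /      ext4  rw,relatime  0 1
/dev/vg0/lv_home  /home  ext4  rw,relatime  0 2
```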
Last edited by frostschutz (Yesterday 22:44:19)
Okay, that makes sense. I am still fairly new to Linux. I remember reading in the wiki that using UUIDs was best practice and have stuck to it. The disk setup I like is partition 1 (EFI), partition 2 (/boot), partition 3 (LUKS, then LVM with lv_root and lv_home). I can cover most of my needs with QEMU and libvirt, but I just like running on bare metal and prefer to keep the two systems separate.
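For reference, that layout looks something like the following in lsblk's tree view (the mapper and volume-group names are whatever was chosen at install time, shown here as placeholders):

```
nvme0n1
├─nvme0n1p1              /efi    (EFI system partition)
├─nvme0n1p2              /boot
└─nvme0n1p3              (LUKS container)
  └─cryptroot            (LVM physical volume)
    ├─vg-lv_root         /
    └─vg-lv_home         /home
```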
Thanks for the insight.
but I then need to generate new uuids, update fstab, and fix grub.
uhm ... what?
you seem to be doing something wrong
uuids are, as their name implies, unique identifiers - they don't change, so there's no need to update fstab or grub
menuentry 'Arch Linux' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-15c36a64-f11c-4f7e-aa6a-3018475d6f6c' {
savedefault
load_video
set gfxpayload=keep
insmod gzio
insmod part_gpt
insmod ext2
search --no-floppy --fs-uuid --set=root 15c36a64-f11c-4f7e-aa6a-3018475d6f6c
echo 'Loading Linux linux …'
linux /boot/vmlinuz-linux root=UUID=15c36a64-f11c-4f7e-aa6a-3018475d6f6c rw
echo 'Loading initial ramdisk …'
initrd /boot/amd-ucode.img /boot/initramfs-linux.img
}
# <file system> <dir> <type> <options> <dump> <pass>
# /dev/nvme0n1p3
UUID=15c36a64-f11c-4f7e-aa6a-3018475d6f6c / ext4 rw,relatime,stripe=32 0 1
# /dev/nvme0n1p4
UUID=1492773a-1139-4b8e-bfd3-d4558eb64b55 /home ext4 rw,relatime,stripe=32 0 2
# /dev/nvme0n1p1
UUID=B053-F0B4 /efi vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro 0 2
# /dev/nvme0n1p2
UUID=da7e198e-53a6-4ccc-be94-761864c95bc9 none swap defaults 0 0
to me your problem sounds like user error from using device paths instead of the uuids
read the install guide carefully:
genfstab -U
is for using uuids
grub-mkconfig also defaults to using uuids
if you need to change anything because nvme0 and nvme1 got swapped, you are obviously using some non-unique identifier that relies on a specific order of these base kernel names
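One quick way to check for this (a sketch, assuming the standard Arch file locations): grep the generated files for raw device paths, which should not appear anywhere if UUIDs are in use.

```shell
# Any entry referencing a bare /dev/nvme* or /dev/sd* path will break
# when enumeration order changes; UUID= entries will not.
grep -nE '/dev/(nvme|sd)[a-z0-9]*' /etc/fstab /boot/grub/grub.cfg \
  && echo "raw device paths found - replace them with UUIDs" \
  || echo "no raw device paths found"
```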