
#1 2009-10-06 16:37:42

thetrivialstuff
Member
Registered: 2006-05-10
Posts: 191

device naming at boot, and usb & udev

I have an older Arch setup on my "big" desktop that I want to upgrade. The reason it doesn't get updated more often is that most kernel updates break something in my complicated setup: 3 IDE drives and 2 SATA drives arranged into 9 different software (mdadm) RAID arrays.

The first arrays (which are the root filesystem, /var, /home, and suchlike) were set up back in the day when IDE hard drives were named /dev/hd* and I didn't have any SATA drives. When the hd* names changed to sd*, I think I was still without SATA drives and everything worked fine. Later I added my first SATA drive and noticed a problem -- the SATA would occasionally jump in and trade names with one of the IDE drives, causing one or more RAID arrays to assemble with 1 out of 2 drives (and a "UUID does not match; kicking drive from array!" message). Then I would have fun trying to figure out which drive had become the "1 out of 2" for that array, so that I didn't synch the older copy onto the newer one by mistake...
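
For the record, the way I end up telling the halves apart is by reading the md superblock off each one and comparing timestamps -- something along these lines (device names here are placeholders, not my actual disks):

# examine each component's RAID superblock; the half with the older
# "Update Time" (and lower event count) is the stale copy
mdadm --examine /dev/hda3
mdadm --examine /dev/hdc3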

Anyway, to solve that I changed my mkinitcpio.conf and some other things to force the IDE drives back to /dev/hd* naming, and somehow I've managed to keep them that way ever since. I have the feeling it's a losing battle, and I only upgrade the kernel when I "have to" and set aside an afternoon for nursing the RAID setup back to coherence when I do.

I think I've about had enough. I want to be able to pacman -Syu again without worrying that half my RAID arrays will break.

So, some questions:

- What's the best way to have these arrays assemble? Since one of them is the root filesystem, I need it to assemble early at boot time, and always assemble correctly.

- Is there any way to force the sdX naming to be consistent every time, with absolute reliability? What I need is for sdc to *stay* sdc, EVEN IF the drive that's normally /dev/sda DIES and is not present at boot. I can't have all the other RAID arrays suddenly go to degraded mode because an unrelated drive failed. The old /dev/hd* scheme gave me that, because the letter encoded "IDE bus number, master/slave", so hdc was always hdc even if it happened to be the first drive present.

- If sdX will always be consecutive (and thus change if an earlier drive goes missing), do the symlinks in /dev/disk/by-id or by-path get created early enough that I can use them in my kernel boot line? i.e. can I say

kernel /vmlinuz26 root=/dev/md2 ro md=2,/dev/disk/by-path/pci-0000:00:1f.1-ide-0:0-part3,/dev/disk/by-path/pci-0000:00:1f.1-ide-0:1-part3 ...

...or is the right way to do this now by UUID? The concern I have with UUID is that it's asking the devices to provide their own identification. What would happen if, during a recovery, I was booting with two drives/partitions with the same UUID? (i.e. say a drive failed so I made a dd of the readable parts onto a different one and have both still in there at boot.) I want it to stop cold in that case and ask me what to do, since if it makes the wrong choice about which drive is the 'real' one it could make really bad things happen.

Similarly, if I've dd'd the drive and then removed the bad one, I might not want to assemble the array again right away, since I might want to mount each independently and compare the filesystems before I resynch. If they're being assembled by UUID, I expect it'll go, "Oh, I have two drives for a two-drive array, both have the correct UUID's, so I'll go ahead and assemble/re-synch them even though they're not on the same buses as before."

I would prefer to address drives by physical bus numbering for those reasons.
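
For clarity, by "physical bus numbering" I basically mean the /dev/disk/by-path/ links udev creates (that's what I'm using in the boot line above). A quick way to see how they map onto the sdX names on any given boot:

# the by-path names encode controller and bus position rather than probe
# order, so this shows which kernel name each physical slot got this time
ls -l /dev/disk/by-path/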


Finally, a less important question: Is there any way to make USB devices not use the same numbering as internal hard drives, ever? I don't like USB drives showing up with /dev/sd* numbering, because that puts them (or me) one typo away from nuking an internal hard drive if I'm doing something to one of them.

All the examples I've been able to find involve naming specific USB devices (which doesn't work for me because it means I'm back to /dev/sd* when I plug in a USB drive I haven't seen before), or symlinking to a /dev/sd* device (e.g. /dev/usbdrive1 -> /dev/sdc). Symlinking isn't good because while I won't be at risk of nuking an internal drive while doing something to the USB drive, the opposite might still happen if I'm doing something to an internal drive.
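
For reference, the symlink-style rule those examples boil down to looks roughly like this (the rules-file name and symlink prefix are just placeholders); as I said, it still leaves the plain /dev/sd* node in place:

# /etc/udev/rules.d/10-usb-disks.rules -- example filename
# give any USB-attached disk an extra /dev/usbdisk/sdX symlink; note that
# the ordinary /dev/sdX node is still created alongside it
SUBSYSTEMS=="usb", KERNEL=="sd*", SYMLINK+="usbdisk/%k"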

(This is related because USB is a likely place for a dd of one of the internal drives to end up. I definitely do not want mdadm trying to assemble an array from one internal drive and one USB drive automatically, ever. If they're both named sd*, I'm afraid that might happen in some weird case I haven't thought of.)

Edit: more paragraph breaks :P

Last edited by thetrivialstuff (2009-10-06 16:44:50)


#2 2009-10-25 22:00:58

thetrivialstuff
Member
Registered: 2006-05-10
Posts: 191

Re: device naming at boot, and usb & udev

bump to add:

OK, after some experimenting in a virtual machine I've confirmed that having two disks with the same UUID will bork the boot process (kernel panic because it apparently tries to use the wrong one). So UUID is definitely out, for all the reasons above.

Will edit this post and add more experiment results as I get them:

- Confirmed that sdX device names change when an earlier device dies or is removed. So, sdX names can't be used.

- Cool... sda and sdb are now randomly switching places in my virtual machine every other time I boot it. (One is fake scsi, one is fake IDE.) sdX naming *definitely* out.

- Using /dev/disk/by-path/ appears to be stable -- paths stayed consistent as I added and removed disks in front of and behind the test root disk. Setting the BIOS to boot from a different disk also did not change the paths.

- ...and my fears about mdadm hook assembly are confirmed -- I dd'd part of a RAID-1 array to a new disk (on a totally different virtual SCSI bus) and mdadm got confused on the next reboot and attempted to assemble the array from that copy. It borked the boot because it got two identical copies (i.e. if the pieces of the RAID-1 are A and B, it got two A's). The superblocks of different components of a RAID-1 array are slightly different, so I'm surprised that it tried to assemble this at all, but I guess it saw identical UUID's and knew that it was looking for two devices with that UUID and thought, "great, that's both of them, stop looking."

- Confirmed that mdadm will automatically include an external USB drive in an array if its UUID matches, even though no USB device was ever part of the array when it was created. Fortunately USB never seems to initialize early enough for this to happen to the root array at boot, but it can happen to arrays assembled later (during the "assembling RAID arrays" stage), and it is guaranteed to happen if the original internal drive for the second half of that array is missing -- i.e. when the only two devices with that UUID are the original "A" from the array and the USB drive.
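
I guess one way to avoid that particular trap (not something I've tried in these VMs yet) would be to wipe the md superblock off the dd'd copy before mdadm ever gets a chance to scan it -- something like:

# run this against the *copy* only, never against a real array member;
# it removes the RAID metadata so mdadm's UUID scan can no longer match it
mdadm --zero-superblock /dev/sdX1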

Last edited by thetrivialstuff (2009-11-01 21:57:57)


#3 2009-11-01 21:59:54

thetrivialstuff
Member
Registered: 2006-05-10
Posts: 191

Re: device naming at boot, and usb & udev

That's it for my experiments on this... Now I'm kind of stuck, because {apparently mdadm doesn't support giving symlinks in mdadm.conf for device= specs (so I can't use /dev/disk/by-path in there)}*, and since the real devices (/dev/sdX) are unreliable, and UUID's are unreliable...

*: Never mind; I'm stupid -- forgot to include DEVICE lines for those links in mdadm.conf; everything is happy now :)
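
For the record, the relevant part of my mdadm.conf now looks roughly like this (only the two components of the root array shown; the other arrays follow the same pattern):

# only these persistent physical paths ever get scanned for array members,
# so a USB clone or a reshuffled sdX name can never be pulled in
DEVICE /dev/disk/by-path/pci-0000:00:1f.1-ide-0:0-part3
DEVICE /dev/disk/by-path/pci-0000:00:1f.1-ide-0:1-part3
ARRAY /dev/md2 devices=/dev/disk/by-path/pci-0000:00:1f.1-ide-0:0-part3,/dev/disk/by-path/pci-0000:00:1f.1-ide-0:1-part3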

Yes, I know my griping about UUIDs looks like a bunch of unlikely situations, and if I were just careful with dd it wouldn't be a problem, but look at it from a security point of view: the UUID of the root partition is readable by non-privileged users, and flash drives are (possibly) writable by them. So a non-privileged user can create a device with the same UUID as the root partition, and there's a chance something will mistake that user-created USB stick for the system's root filesystem.

At the moment it seems pretty unlikely that USB will ever initialize fast enough to compete for the root filesystem at boot time, but that might change -- maybe USB3 will initialize faster. How quickly does firewire init? eSATA could *definitely* compete on equal footing, which would reduce the legitimate root filesystem's claim to winning a race condition. (I also wonder whether a filesystem on a DVD-RAM or +RW gets examined for its UUID during boot -- it might even win if the PATA bus the DVD drive is on comes "before" the SATA bus containing the hard drives.)
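
To make that concrete, the sort of thing I mean is (sketch only; the device name at the end is a placeholder for whatever removable device a user happens to have write access to):

# the root filesystem's UUID is world-readable just from the symlink names:
ls -l /dev/disk/by-uuid/

# and that UUID can be stamped onto any ext2/3 filesystem the user can
# write to, e.g. a flash drive (placeholder device name):
tune2fs -U <uuid-copied-from-above> /dev/sdz1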

That's why I believe that relying on a partition to stand up and say "I'm the root filesystem, use me!" is a fundamentally flawed idea.

I'd like some comments on this from people with more experience using UUIDs for filesystems -- aside from this week's virtual machine experiments, I've never used them. If there are precedence rules or safeguards that guarantee the boot stops cold (rather than silently picking the wrong device) in some of the cases I tried, I'd love to hear about them.

Edited for stupidity :P

Last edited by thetrivialstuff (2009-11-01 23:31:36)

