The issue on my new computer used to only come up when I had snapshots (usually after updating Arch, in case I needed to roll back).
Nothing I could do with lvm or lvmetad on the rescue command line would get things to proceed. However, if I booted enough times, things would randomly work. So it does seem like a timing thing to me. So far, no issues with snapshots using the global_filter solution.
Everyone who is still having trouble, try changing the global filter line in /etc/lvm/lvm.conf from
# global_filter = []
to
global_filter = [ "r|^/dev/fd[0-9]$|", "r|^/dev/sr[0-9]$|" ]
Then run mkinitcpio -p linux again.
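If you want to sanity-check what those two regexes actually match before rebuilding the image, you can exercise them with grep (purely illustrative; the single combined pattern below is equivalent to the two "r|...|" entries, which lvm applies internally):

```shell
# Which device nodes would the reject ("r") patterns filter out?
for dev in /dev/fd0 /dev/sr0 /dev/sda1 /dev/md127; do
  if printf '%s\n' "$dev" | grep -Eq '^/dev/(fd|sr)[0-9]$'; then
    echo "$dev: rejected, lvm will not scan it"
  else
    echo "$dev: still scanned"
  fi
done
```

So the filter only stops lvm from poking the floppy and optical drives; real disks and md arrays are still scanned.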
And it solved the problem for me too!
I have an old Athlon XP with 2 hard disks using LVM2 and a floppy drive. This trick fixed the "end_request: I/O error, dev fd0, sector 0" error that prevented the root partition from being detected at boot before the timeout.
Thanks!
]]>First things first, I just tried the sleep hook. It works with a plain 5s sleep and also if I hardcode poll_device for all my logical volumes. This is great, but will probably be lost on the next mkinitcpio update (where the hook will just sleep 5s again). I can very well live with this solution.
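For the record, the hardcoded variant looks roughly like this (a sketch; the hook name "waitlvm" and the volume names are placeholders for my setup, and poll_device is the helper mkinitcpio provides at runtime):

```shell
# /etc/initcpio/hooks/waitlvm (hypothetical custom hook)
run_hook() {
    # Wait up to 10 seconds for each logical volume node to appear.
    # poll_device comes from mkinitcpio's init_functions.
    poll_device /dev/mapper/vg-home 10
    poll_device /dev/mapper/vg-data 10
}
```

A matching install file under /etc/initcpio/install/ and an entry in the HOOKS array are still needed for mkinitcpio to pick it up.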
Can you give me a bit more detail on this, i.e. what exactly you put in your mkinitcpio.conf etc.?
I am suffering from this annoying problem (see here) and have not found a solution yet. Thanks a lot.
]]>So I took the lvm2 ABS sources and then modified them to use the git source instead (renaming them to lvm2-git and device-mapper-git). It was super trivial to do. I have tested three boots in a row, all of which seem to work flawlessly.
Here is the diff for what is in the ABS right now (2.02.98-4): http://pastebin.com/B2nCsFYn
Hopefully this might help someone else who might still be struggling with this. Maybe there will be a new stable release soon, which would likely include whatever it is that fixed this for me...
]]>Ended up re-installing my server while moving most of the precious data onto a new NAS device. When I was copying the data into its new location, I noticed that the second disc in the RAID-1 had been out of sync for some time, which partially caused the issues I had with this upgrade. However, the sync issue had started *long before* the upgrade; somehow the upgrade just brought it to light.
All in all, everything's fine now: I've been happy with the NAS, no data was lost, and the re-installed Arch setup is working just fine for me.
]]>[root@nomad ~]# umount /home/data/; umount /home/; lvchange -a n /dev/mapper/vg-home ; vgchange -a n vg ; vgmerge -v vgsys vg
umount: /home/data/: not mounted
0 logical volume(s) in volume group "vg" now active
Checking for volume group "vg"
Checking for volume group "vgsys"
Archiving volume group "vg" metadata (seqno 24).
Archiving volume group "vgsys" metadata (seqno 12).
Writing out updated volume group
/dev/sda1: lseek 18446744071600930816 failed: Invalid argument
Google suggests a missing #include <unistd.h>, but the current lvm code (lib/device/dev-io.c) already has that include. But now I'm getting really offtopic, so thanks WonderWoofy for all your help. I guess this is something for the lvm guys.
]]>LVM2 has a crazy amount of cool features and options. From what you have told me, it just seems like you don't fully grasp the amazing benefits that using LVM2 can bring. So I am not saying that you cannot use LVM2 in the way you are. But I think that there are better ways to utilize its power. You can do what you want. I just want to make you aware.
You're absolutely right, I feel like I've totally missed the best parts of LVM. I just used it as a magic out-of-place-add-another-disc tool. Although I've read a lot about LVM, I never understood the idea of physical extents and the way LVM allocates them, and I didn't care too much 'cause nobody pointed me that way and I just assumed my scenario was more or less the whole LVM thing. Except for LVM-level RAIDing, I knew about that, just didn't need it.
Anyway, thanks for your insights and example, now I also understand what lvdisplay -m is for... I will start merging right now.
]]>So if you are a visual kind of person, here is an example. Say you have a volume group "computer" and you have a logical volume called "stuffs". But your setup spans two normal drives /dev/sda2 and /dev/sdb1 and a RAID 0 array /dev/md127. You want to expand by 23.7GB the "stuffs" volume, but you want to ensure that it uses only space available on /dev/md127 (does this example sound familiar?). You would do:
# lvresize -L +23.7G computer/stuffs /dev/md127
This assumes that you created a logical volume on /dev/md127 without creating partitions. In case you are not aware, LVM2 can handle whole disks w/o partitions quite well.
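One way to verify afterwards where the extents actually landed (same example names as above; these need root against a live lvm setup, so just a sketch):

```shell
# List each LV segment together with the backing physical volume(s)
lvs -o lv_name,seg_size,devices computer

# Long-form view of the same mapping for a single volume
lvdisplay -m /dev/computer/stuffs
```

If the resize did what you wanted, the new segment of "stuffs" should show /dev/md127 in the devices column.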
Also, I'm not sure how willing you would be to change your setup (or rather if you have the available disk space to move stuff around), but LVM2 offers integrated RAID levels in its functionality. So you can actually create a large lvm storage pool (a volume group) that spans several block devices. Then you can create linear (normal) logical volumes as you wish. But you can also throw in striped logical volumes, or create mirrored logical volumes as well. These can all coexist within the same volume group, and this is handled with the physical volume location option as exemplified above.
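A rough sketch of what mixing those volume types in one volume group could look like (the group name "computer" is reused from the example above; the LV names and sizes are arbitrary):

```shell
# Linear (normal) logical volume -- lvm places it wherever there is room
lvcreate -n plain -L 50G computer

# Striped across 2 physical volumes, RAID 0 style
lvcreate -n fast -L 50G -i 2 computer

# Mirrored with one extra copy, RAID 1 style
lvcreate -n safe -L 50G -m 1 computer

# Pin a volume to one specific device, as in the lvresize example
lvcreate -n pinned -L 50G computer /dev/md127
```

All four would live side by side in the same volume group.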
You should know though that in your case, using LVM2 striping (RAID 0) will work the same way as what you have going now. It will be faster, yet have half the fault tolerance. But RAID 1 on mdadm will use both copies to read the data, so the resulting read speeds will be similar to RAID 0. With LVM2, the mirroring (RAID 1) functionality apparently does not bring this benefit, so the read speeds with LVM mirroring will be the same as reading from a single disk.
LVM2 has a crazy amount of cool features and options. From what you have told me, it just seems like you don't fully grasp the amazing benefits that using LVM2 can bring. So I am not saying that you cannot use LVM2 in the way you are. But I think that there are better ways to utilize its power. You can do what you want. I just want to make you aware.
]]>You certainly can combine lvm volumes into a big single one with vgmerge. Then you can use the lvm tools to also ensure that the logical volumes stay on the device of your choice. This seems like a better option than having three volume groups for three logical volumes. The thing is that with three volume groups, there is no practical advantage over regular partitions.
Yes, I can easily merge the root and home vgs and keep the lvs separated, but I see no way to also merge the volume group on my raid device. I want to make sure that /home/data is located on the raid discs. And if I merge the raid volume group I don't think there is a way to map a logical volume to specific physical volumes, after all, this is what lvm hides.
Anyway, thanks a lot for your help. I have a booting system again and know that I have to take care of the lvm setup. I really wouldn't mind just using the 5s sleep hook, I'm rarely in such a hurry.
]]>What I mean by old setup is the package versions of lvm and device-mapper that did not use lvmetad, but instead simply used a series of simple commands in a script to activate the volume groups. If you view the package on the web site, you can click the option in the upper right hand corner to view the changes to the package. Then you can track back to around November or December, when the package was built in that way. You can easily tell by the commits when that took place. Then just use those versions of the files to build the package yourself. Oh BTW, the lvm2 package is actually a split package that will build both the lvm2 and device-mapper packages.
]]>I see, so what you are saying is that sometimes it fails waiting for the rootfs, but sometimes it fails waiting for the second lvm in order to fsck the volumes?
Almost right. The no-rootfs error was resolved with the global_filter expression. But I have another 2 data volume groups, see below.
So why not combine the two lvm volumes, and then just make sure that the right volumes are dedicated to the right devices. This is actually totally possible with lvm and actually should be pretty easy. Maybe in this way you can ensure that the entire single lvm appears before proceeding.
Just a thought. I am unsure if there are other reasons for keeping the layout separate, but it sounds like this is just something you did on setup and have since then never had the motivation to change. So maybe this might be the motivation you need.
Right, why not? My layout consists of 3 vgs with 1 lv each: vg-root (mounted at /), vg-home (/home) with random data and vg-data (/home/data) with "important" data. I split root and home for no particular reason, I thought it might be handy to have them separated and, through LVM, always extendable. /home/data is the raid1 device with data not to be lost. I am by no means an expert in partitioning or LVM, is it possible to combine those into a single mapped device while making sure /home/data stays on the raid device?
BTW, have you tried building the lvm2/device-mapper packages with the old setup. Lvmetad is cool, but not so cool that it is worth a shoddy boot process IMO.
I'm sorry, I have no idea what you mean. Is "old setup" the one before the update (old lvm package, old kernel, old ...)?
]]>So why not combine the two lvm volumes, and then just make sure that the right volumes are dedicated to the right devices. This is actually totally possible with lvm and actually should be pretty easy. Maybe in this way you can ensure that the entire single lvm appears before proceeding.
Just a thought. I am unsure if there are other reasons for keeping the layout separate, but it sounds like this is just something you did on setup and have since then never had the motivation to change. So maybe this might be the motivation you need.
BTW, have you tried building the lvm2/device-mapper packages with the old setup. Lvmetad is cool, but not so cool that it is worth a shoddy boot process IMO.
]]>resi wrote: I didn't know that hook, but will give it a try when I'm back home. I think I have to patch it though, 'cause I'm waiting for more than 1 device.
I don't think so, as the device you are waiting for is a virtual device, but still a device. Therefore having it wait for /dev/mapper/volume-name should work just fine, as it is simply waiting for the thing to appear in /dev.
I understand that, but I'm waiting for 2 logical volumes (dm-1 and dm-2). I don't remember exactly why I chose that setup, but I think I used it to have a logical volume with just 1 physical partition and then let LVM2 mirror this partition on a new disc.
Offtopic:
Funny thing, I'm going the other direction. Not on the machine with the LVM2/udev issues, but on my notebook (btrfs waking up the disc way too often, resulting in poor battery lifetime) and my machine at work, where I moved to ext4+LVM2 just yesterday. btrfs is very nice when looking at the features, but unbearable on a desktop running firefox, thunderbird and db related stuff (leafnode and recoll come to my mind). I've been running btrfs on this machine for more than a year now and got tired of waiting for firefox/thunderbird to take several minutes to be usable (after booting) and the generally high I/O waiting time (where iotop always identified btrfs processes as the cause). Now my desktop with firefox, thunderbird and some other tools starts in only a few seconds.
Yeah, see I have a SSD... or rather three. So not only does it work incredibly fast, but by using the ssd mount option it tries to aggressively group write commands (amongst some other things I think).
Ok, btrfs on SSDs is probably another thing.
]]>I didn't know that hook, but will give it a try when I'm back home. I think I have to patch it though, 'cause I'm waiting for more than 1 device.
I don't think so, as the device you are waiting for is a virtual device, but still a device. Therefore having it wait for /dev/mapper/volume-name should work just fine, as it is simply waiting for the thing to appear in /dev.
Offtopic:
Funny thing, I'm going the other direction. Not on the machine with the LVM2/udev issues, but on my notebook (btrfs waking up the disc way too often, resulting in poor battery lifetime) and my machine at work, where I moved to ext4+LVM2 just yesterday. btrfs is very nice when looking at the features, but unbearable on a desktop running firefox, thunderbird and db related stuff (leafnode and recoll come to my mind). I've been running btrfs on this machine for more than a year now and got tired of waiting for firefox/thunderbird to take several minutes to be usable (after booting) and the generally high I/O waiting time (where iotop always identified btrfs processes as the cause). Now my desktop with firefox, thunderbird and some other tools starts in only a few seconds.
Yeah, see I have a SSD... or rather three. So not only does it work incredibly fast, but by using the ssd mount option it tries to aggressively group write commands (amongst some other things I think).
]]>If you think it is a race condition (which it very very well may be), have you thought about trying to use the mkinitcpio "sleep" hook? There is the parameter "sleepdevice=<device>" which might just work, as it simply pauses the initramfs process until the device is found. I think if you stick it in between the lvm2 hook and the fsck hook somewhere, maybe it will actually have it wait until the initramfs finds the device.
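Concretely, the arrangement I have in mind would be something like this (a sketch; the exact HOOKS line depends on the rest of your config, and the device name is just an example):

```shell
# /etc/mkinitcpio.conf -- "sleep" placed between lvm2 and fsck
HOOKS="base udev autodetect modconf block lvm2 sleep filesystems fsck"

# On the kernel command line, tell the hook which device to wait for:
#   sleepdevice=/dev/mapper/vg-root

# Rebuild the image afterwards
mkinitcpio -p linux
```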
I didn't know that hook, but will give it a try when I'm back home. I think I have to patch it though, 'cause I'm waiting for more than 1 device.
Having said all that, I will tell you that because of all this craziness, I have moved away from using LVM2. This is unfortunate because I really liked LVM2, and until this particular update it was incredibly reliable. So now I am using btrfs... and TBH, I love it. It may not be quite as performance oriented as the ext4 filesystem, but it gives me all the features of LVM2 and a lot more. I still kind of miss LVM2 though.
Offtopic:
Funny thing, I'm going the other direction. Not on the machine with the LVM2/udev issues, but on my notebook (btrfs waking up the disc way too often, resulting in poor battery lifetime) and my machine at work, where I moved to ext4+LVM2 just yesterday. btrfs is very nice when looking at the features, but unbearable on a desktop running firefox, thunderbird and db related stuff (leafnode and recoll come to my mind). I've been running btrfs on this machine for more than a year now and got tired of waiting for firefox/thunderbird to take several minutes to be usable (after booting) and the generally high I/O waiting time (where iotop always identified btrfs processes as the cause). Now my desktop with firefox, thunderbird and some other tools starts in only a few seconds.