LVM _should_ just work. What kind of RAID are you using? dmraid is known to be problematic. Using mdadm instead should just work.
If you want to boot even if some filesystems are not mounted, this can be done by adding "nofail" to the options in /etc/fstab.
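For illustration, a "nofail" entry in /etc/fstab looks like the following (the device name and mount point here are made-up examples; use your own):

```
# /etc/fstab — hypothetical example entry
# <device>             <mount>  <type>  <options>        <dump>  <pass>
/dev/mapper/vg0-data   /data    ext4    defaults,nofail  0       2
```

With nofail set, a failure to mount that filesystem will not drop the boot into emergency mode.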
LVM doesn't work out of the box.
I've updated the systemd wiki to add a
systemctl enable lvm
to, ehm, enable lvm from the emergency console. After that the system does boot. I'm not sure what the systemctl enable lvm-monitoring settings are for, but these don't work on my installation.
[j@janus ~]$ sudo systemctl enable lvm-monitoring
Failed to issue method call: No such file or directory
If these lines help other users we can leave them, but otherwise I suggest removing the section on LVM, as it confused more than helped me.
... and now my machine doesn't boot anymore....
In order to set up systemd with LVM, systemd has to be running; in order to run systemd, LVM needs to be configured. That's what I read into this. (Was it route 66 or catch-25? I forgot.)
Chroot, configure, reboot. You need to install the lvm2 package.
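For anyone else stuck at this point, the chroot-configure-reboot sequence from a rescue medium would look roughly like this. This is only a sketch; the device names (md arrays, vg0-root) are examples and must be adjusted to your own layout:

```
# From a live/rescue environment, as root:
mdadm --assemble --scan            # bring up md arrays, if any
vgchange -ay                       # activate LVM volume groups

mount /dev/mapper/vg0-root /mnt    # mount the installed root
arch-chroot /mnt                   # enter the installed system

# Inside the chroot: make sure lvm2 is present and rebuild the initramfs
pacman -S lvm2
mkinitcpio -p linux

exit
reboot
```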
LVM with Systemd | Using Hooks
Asking to be spoon fed information, and or blaming your issues on a lack of information is pretty lame.
Do I understand that using systemd with LVM2 was not really tested and documented?
No. Just that the wiki got updated too quickly. It reflects the changes that went into the newest release, which is still in the testing repository (where they have been stuck for about a month). Hopefully they will move to core very soon, and all will be well. (I use lvm with systemd without problems, using testing of course).
LVM _should_ just work. What kind of RAID are you using? dmraid is known to be problematic. Using mdadm instead should just work.
If you want to boot even if some filesystems are not mounted, this can be done by adding "nofail" to the options in /etc/fstab.
I'm running an md system.
So it seems the answer is that the wiki is out of sync with reality (this time reality lagging the wiki). I advise you to either wait for lvm2 to reach [core] before doing the switch (it is still possible to switch with the current lvm2, but obviously the instructions are MIA), or to simply do "pacman -S testing/lvm2", which should work.
Do I understand correctly that using systemd with LVM2 was not really tested and documented? (And before anyone gets annoyed: this is really just a question.) In that case I will stick with initscripts, unless someone is interested in having me ramble about problems I encounter on the way to systemd bliss.
[root@janus j]# pkgfile lvm-monitoring
error: No repo files found. Please run `pkgfile --update'.
[root@janus j]# pkgfile --update
:: Updating 3 repos...
warning: download failed: ftp://mirror.leaseweb.com/archlinux/core/os/x86_64/core.files: response reading failed
error: failed to update repo: core
download complete: community [ 6.5 MiB 1369K/s 1 remaining]
download complete: extra [ 5.8 MiB 867K/s 0 remaining]
:: download complete in 6.83s < 12.3 MiB 1844K/s 2 files >
:: waiting for 1 process to finish repacking repos...
[root@janus j]# pkgfile lvm-monitoring
[root@janus j]#
[root@janus j]# pkgfile lvm-monitoring.service
error: No repo files found. Please run `pkgfile --update'.
falconindy wrote:Then it doesn't exist. The unit belongs to a package in testing:
$ pkgfile lvm-monitoring.service
testing/lvm2
[root@janus j]# pkgfile lvm-monitoring.service
bash: pkgfile: command not found
3 years in Arch and you don't know 'command not found' means the pkgfile package is not installed?
Then it doesn't exist. The unit belongs to a package in testing:
$ pkgfile lvm-monitoring.service
testing/lvm2
[root@janus j]# pkgfile lvm-monitoring.service
bash: pkgfile: command not found
$ pkgfile lvm-monitoring.service
testing/lvm2
[root@janus j]# mdadm --detail /dev/md/127_0
/dev/md/127_0:
Version : 0.90
Creation Time : Tue Dec 15 20:49:43 2009
Raid Level : raid5
Array Size : 2930287488 (2794.54 GiB 3000.61 GB)
Used Dev Size : 976762496 (931.51 GiB 1000.20 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 127
Persistence : Superblock is persistent
Update Time : Thu Dec 6 01:01:07 2012
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 64K
UUID : fb0ea4c6:f8a5a72c:f2b17937:afd40c67
Events : 0.598315
Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb
2 8 32 2 active sync /dev/sdc
3 8 48 3 active sync /dev/sdd
is this something I should tell systemd?
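As an aside, if the question is how to make an md array known at boot, the usual approach (independent of the init system) is to record it in /etc/mdadm.conf and then regenerate the initramfs so the mdadm hook picks it up. A sketch, to be run as root:

```
# Append the array definition(s) to mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf

# Rebuild the initramfs so the array is assembled at early boot
mkinitcpio -p linux
```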
Your other thread is silly. You need the '.service' suffix on units when you aren't booting on systemd. 'systemctl enable lvm-monitoring.service' will work. systemd works just fine with LVM.
I invite you to look at https://wiki.archlinux.org/index.php/Systemctl#LVM
There is no ".service" mentioned in this wiki, so the wiki is painfully wrong, or at least misleading. And don't take this personally; I don't. My threads aren't silly. :-)
I think what I meant to say is that a working system on LVM might (repeat, might) not work as the wiki's systemd installation instructions intend.
But let's not get side-tracked.
[root@janus j]# systemctl enable lvm-monitoring.service
Operation failed: No such file or directory
If you want to boot even if some filesystems are not mounted, this can be done by adding "nofail" to the options in /etc/fstab.