Hi!
I've just installed a fresh Arch Linux system with zfs-linux from the archzfs repository. Root is on an ext4 SSD; the ZFS pool /data is configured as raidz1. Nothing fancy is going on, it will just store backup mirrors. However, I've run into a weird issue. I enabled `zfs-import-cache.service` per the wiki docs, but the import fails with the following log:
...
Feb 11 16:22:34 nutcracker systemd[1]: Mount ZFS filesystems was skipped because of a failed condition check (ConditionPathIsDirectory=/sys/module/zfs).
...
Feb 11 16:22:35 nutcracker systemd-modules-load[231]: Inserted module 'zfs'
Feb 11 16:22:35 nutcracker audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 11 16:22:35 nutcracker systemd[1]: Finished Load Kernel Modules.
...
After that, I see the zfs-import-cache.service failed:
[root@nutcracker ~]# systemctl status zfs-import-cache.service
* zfs-import-cache.service - Import ZFS pools by cache file
Loaded: loaded (/usr/lib/systemd/system/zfs-import-cache.service; enabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:zpool(8)
It appears the service starts too early, before the zfs module is loaded, and fails. Restarting the service manually works and imports the pool. Any idea why the service starts too early? And even if it does, shouldn't it be able to load the module on its own?
Any advice is appreciated!
Cheers,
Krakonos
Offline
Had the same issue. I added zfs to the end of the HOOKS line in /etc/mkinitcpio.conf, then ran sudo mkinitcpio -P. The pool now shows up in zpool status after rebooting.
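For clarity, the edit looks something like this (the existing hooks in your config will differ):
HOOKS=(... zfs)
sudo mkinitcpio -P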
Last edited by Jono4 (2022-02-21 05:05:06)
Offline
Not a fix to your issue, but you can replace zfs-import-cache with zfs-import-scan. I've been using zfs-import-scan with no issues.
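If you want to go that route, the swap is something like this (these are the unit names shipped by OpenZFS):
# systemctl disable zfs-import-cache.service
# systemctl enable zfs-import-scan.service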
Offline
Thanks for the suggestions. I returned to the machine today and finished the setup. Putting zfs into the initcpio image fixed it. I also tried zfs-import-scan per the other suggestion, but had no luck there: zfs-import-scan fails with the same message. I guess I'll raise the issue with the archzfs folks on GitHub to see if this is a bug in the systemd unit design.
Cheers,
Krakonos
Offline
I solved this in two parts, after checking a couple of things and following https://wiki.archlinux.org/title/ZFS#Us … nt.service
Part1)
#1 I followed the instructions at https://wiki.archlinux.org/title/ZFS#Us … nt.service for zfs-mount.service and zfs-import-cache.service,
paying close attention to setting the cache file ("zpool set cachefile=/etc/zfs/zpool.cache") for each pool in the system
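For example, for a pool named data (substitute your own pool names), that looks something like:
# zpool set cachefile=/etc/zfs/zpool.cache data
# zpool get cachefile data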
#2 /sbin/modprobe zfs
to make sure the zfs module is loaded
#3 dmesg | grep ZFS
to make sure it loads smoothly on boot (in my case it did not)
Prior to completing steps #4 and #5, my dmesg output had errors because zfs-mount.service and zfs-import-cache.service were starting before the systemd ZFS startup target, around the 12-second mark.
Once steps #4 and #5 were completed, the dmesg output was as follows (zfs-mount.service was starting correctly, but zfs-import-cache.service was not):
[HOST username]# dmesg | grep ZFS
[ 2.468684] ZFS: Loaded module v2.1.5-1, ZFS pool version 5000, ZFS filesystem version 5
[ 15.223891] systemd[1]: Reached target ZFS startup target.
#4 for zfs-mount.service: it seemed smart to ensure the cache import is done before the mount, since the pool list has to be populated before anything can be mounted correctly. I had 2 issues here:
(1) the zfs kernel module was not loaded before zfs-mount.service started, so I added After=systemd-modules-load.service to wait for it (well, that was the theory; it's duplicated in step #5)
(2) I added an ordering dependency so that zfs-import-cache.service completes first
There is another thread that suggests removing the check for the zfs module ("# ConditionPathIsDirectory=/sys/module/zfs"); as far as I can tell that is not the right fix. I went back and re-enabled the path check at the end and it all still works OK, so the check itself is not the problem; my understanding is that it passes as long as the kernel module is loaded before the service starts.
The resulting ordering lines in zfs-mount.service (the ones marked >> are the ones I added):
After=systemd-udev-settle.service
>> After=systemd-modules-load.service
>> After=zfs-import-cache.service
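One way to add those lines without editing the shipped unit file (just a suggestion, not necessarily how I did it) is a drop-in override:
# systemctl edit zfs-mount.service
(this opens an override file, typically /etc/systemd/system/zfs-mount.service.d/override.conf, where you add)
[Unit]
After=systemd-modules-load.service
After=zfs-import-cache.service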
#5 for zfs-import-cache.service, I made sure the zfs kernel module is loaded first, in the same way as in step #4:
After=systemd-udev-settle.service
>> After=systemd-modules-load.service
After=cryptsetup.target
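To confirm the ordering actually took effect after a daemon-reload, something like this should list systemd-modules-load.service in the output:
# systemctl show -p After zfs-import-cache.service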
Part2)
These steps alone were not enough until I also followed the recommendation above:
>> Added zfs to /etc/mkinitcpio.conf at the end of the hooks line then ran sudo mkinitcpio -P
Before this step zfs-mount.service was starting correctly but zfs-import-cache.service was not, and my zpool had nothing in it.
Adding this step resulted in zfs-import-cache.service starting cleanly and everything mounting on boot.
Validating everything:
[HOST username]# dmesg | grep ZFS
[ 2.468684] ZFS: Loaded module v2.1.5-1, ZFS pool version 5000, ZFS filesystem version 5
[ 15.223891] systemd[1]: Reached target ZFS startup target.
[ 15.334656] systemd[1]: Starting Mount ZFS filesystems...
[ 15.409929] systemd[1]: Finished Mount ZFS filesystems.
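As a final sanity check after the reboot, it's also worth eyeballing the pool and the mounts, e.g.:
# zpool list
# zfs mount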
I am running 5.15.49-1-MANJARO and did not bother to go back and retest / rebuild from scratch, but I guess I'll find out next time I upgrade the kernel/system.
I'd take a rough guess that booting from ZFS wouldn't work with this setup, since I've required the systemd modules-load service to finish completely first.
Well, hope that might be useful to someone
- beast
Last edited by beaster99 (2022-07-04 17:11:20)
Offline
I got a similar issue (pool not imported on boot), but the error message showed up for zfs-mount.service instead:
$ systemctl status zfs-mount
○ zfs-mount.service - Mount ZFS filesystems
Loaded: loaded (/usr/lib/systemd/system/zfs-mount.service; enabled; preset: enabled)
Active: inactive (dead)
Condition: start condition failed at Mon 2022-08-08 01:02:52 WIB; 1min 23s ago
└─ ConditionPathIsDirectory=/sys/module/zfs was not met
Docs: man:zfs(8)
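That ConditionPathIsDirectory check only passes once the zfs module is actually loaded, which you can confirm by hand with something like:
# modprobe zfs
# ls -d /sys/module/zfs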
In my case, the solution was to add zfs to the end of the HOOKS= line in /etc/mkinitcpio.conf:
HOOKS=( ... zfs)
And then regenerating the ramdisk image:
# mkinitcpio -P
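If you want to double-check that the module actually ended up in the image, something like this should work (adjust the image path for your kernel):
# lsinitcpio /boot/initramfs-linux.img | grep zfs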
Should this go into the wiki? The ZFS wiki page doesn't say anything about this, except for the encrypted-pool case.
Offline