I have been using ZFS for a couple of days now.
It's only for storage, not for the root system.
I installed it and enabled zfs.target at startup, but for the last couple of reboots I've noticed that zfs-import-cache.service fails, which sometimes results in the pool not getting mounted.
● zfs-import-cache.service - Import ZFS pools by cache file
Loaded: loaded (/usr/lib/systemd/system/zfs-import-cache.service; static)
Active: failed (Result: exit-code) since sáb 2014-07-05 23:42:21 CEST; 44min ago
Process: 174 ExecStart=/usr/bin/zpool import -c /etc/zfs/zpool.cache -aN (code=exited, status=1/FAILURE)
Main PID: 174 (code=exited, status=1/FAILURE)
jul 05 23:42:21 7thHeaven zpool[174]: Unable to open /dev/zfs: No such file or directory.
jul 05 23:42:21 7thHeaven zpool[174]: Verify the ZFS module stack is loaded by running '/sbin/modprobe zfs'.
jul 05 23:42:21 7thHeaven systemd[1]: zfs-import-cache.service: main process exited, code=exited, status=1/FAILURE
jul 05 23:42:21 7thHeaven systemd[1]: Failed to start Import ZFS pools by cache file.
jul 05 23:42:21 7thHeaven systemd[1]: Unit zfs-import-cache.service entered failed state.
Any idea why this would happen?
Thanks.
My guess would be that zfs-import-cache.service is running before the zfs kernel modules have created the /dev/zfs device, or that the zfs kernel modules are not being loaded at all.
You can check if the modules are loaded with lsmod.
If the modules are loaded, is there a /dev/zfs device?
If the modules are loaded and the /dev/zfs device exists, that points to a timing issue at boot as the most likely cause.
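For example, a quick check from a shell (standard commands; the output will vary):

lsmod | grep -E 'zfs|spl'   # should list the zfs/spl modules if they are loaded
ls -l /dev/zfs              # should show the device node once the module has created it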
If it is the timing issue, you could try the following:
1. Create an executable script
/etc/zfs/waitforzfsdevice
with the following contents:
#!/bin/bash
# wait up to 10 seconds for the zfs module to create /dev/zfs
for i in {1..10}; do
    [[ -e /dev/zfs ]] && exit 0
    sleep 1
done
# final check: the exit status tells systemd whether the device appeared
[[ -e /dev/zfs ]]
2. Create the directory
/etc/systemd/system/zfs-import-cache.service.d
3. Create the file
/etc/systemd/system/zfs-import-cache.service.d/waitforzfsdevice.conf
with the following contents:
[Service]
ExecStartPre=/etc/zfs/waitforzfsdevice
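Once both files are in place, something like this should apply the change (a minimal sketch; the daemon-reload is needed so systemd picks up the drop-in):

chmod +x /etc/zfs/waitforzfsdevice            # the wait script must be executable
systemctl daemon-reload                       # reload unit files so the drop-in takes effect
systemctl restart zfs-import-cache.service    # or just reboot and watch the journal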
Yes, that seems to be the problem. zfs-import-cache.service is running before the zfs kernel modules have created the /dev/zfs device.
I added TimeoutStartSec=10 to the service, which seemed to work, but on a fast boot where the devices haven't been brought up and recognized yet, it would fail to load the cache again.
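For reference, a drop-in along these lines would set that timeout (exactly where the setting was added isn't shown above, so the placement here is an assumption):

[Service]
# only changes how long systemd waits for the service to start;
# it does not delay the start itself, which is why fast boots can still race
TimeoutStartSec=10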
I'll try your way, seems safer.
BTW, I'm trying to set up zed so it sends me an email on every event, but I can't seem to get it working. I installed and started postfix, but I still don't really know how to configure things so the mails actually get sent. Any tips?
Thanks!!!
I've got a related problem that I posted about yesterday, and I wonder whether this busy-wait tactic would work in my scenario as well. Isn't there a way to do this with Requires=/After= instead of an ExecStartPre= wait script?
BTW, I'm trying to set up zed so it sends me an email on every event, but I can't seem to get it working. I installed and started postfix, but I still don't really know how to configure things so the mails actually get sent. Any tips?
You need to create a script in /etc/zfs/zed.d/ - see the zed man page.
Use the existing scripts in that dir for examples (e.g. scrub.finish-email.sh).
The existing scripts use /etc/zfs/zed.d/zed.rc; in it you need to set ZED_EMAIL to the address you want mail sent to in order for the existing mailer scripts to do anything. Your own script could hard-code the email address instead.
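A minimal sketch of such a zedlet, assuming the zed.rc path above and the ZEVENT_* variables zed exports (the file name all-email.sh and the message format are made up for illustration):

#!/bin/sh
# /etc/zfs/zed.d/all-email.sh -- hypothetical zedlet mailing every event
. /etc/zfs/zed.d/zed.rc                      # picks up ZED_EMAIL
test -n "${ZED_EMAIL}" || exit 2             # nothing to do without an address
printf 'zed event: %s (pool: %s)\n' "${ZEVENT_CLASS:-unknown}" "${ZEVENT_POOL:-n/a}" \
    | mail -s "zed: ${ZEVENT_CLASS:-event}" "${ZED_EMAIL}"

Postfix then only has to deliver the mail that mail(1) hands it.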
I've got a related problem that I posted about yesterday, and I wonder whether this busy-wait tactic would work in my scenario as well. Isn't there a way to do this with Requires=/After= instead of an ExecStartPre= wait script?
If you look at zfs.target et al., you'll see they already use Requires=/After=/Before= to order things. Xi0N's problem is not the ordering, but that the kernel module hasn't always created the /dev/zfs device by the time systemd runs the .service.
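For illustration, the ordering directives look roughly like this (the directive names are real systemd options, but the exact contents of the zfs units vary by version, so treat this as a sketch):

[Unit]
# e.g. the mount service runs after the import service and before zfs.target
Requires=zfs-import-cache.service
After=zfs-import-cache.service
Before=zfs.target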
I use ZFS as my root, and the zfs hook runscript has a similar wait in it - so for me the zfs device exists by the time systemd runs the .service.
I don't know if your problem is the slowness of the zfs kernel modules or something else. My first port of call would be to check the journal for errors.
I use ZFS as my root, and the zfs hook runscript has a similar wait in it - so for me the zfs device exists by the time systemd runs the .service.
I don't know if your problem is the slowness of the zfs kernel modules or something else. My first port of call would be to check the journal for errors.
I also use ZFS as root. After upgrading to 3.15 I also have problems during boot.
Can you please show me your zfs hook runscript? Where is the script located?
I use the hook script provided by “zfs-utils-git” from demizer's repo. It's installed to: /usr/lib/initcpio/hooks/zfs
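A quick way to confirm the hook is actually active at boot, assuming a standard Arch mkinitcpio setup (hook and array names as on a stock install):

grep ^HOOKS /etc/mkinitcpio.conf   # 'zfs' must appear in the HOOKS line
mkinitcpio -p linux                # regenerate the initramfs after any change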