I have an odd issue here: something must have happened on May 5th that now causes zfs-mount.service to fail, even though my pools still get mounted correctly.
During boot zfs-mount.service fails, which causes my bind mounts to fail, which in turn causes NFS to fail, which breaks my KVM guests. After the system boots into Arch my pools are mounted, but I have to run "mount -a" for my bind mounts to work.
journalctl -u zfs-mount.service shows the following:
-- Reboot --
May 05 21:03:31 nas.brandongolway.us systemd[1]: Starting Mount ZFS filesystems...
May 05 21:03:31 nas.brandongolway.us systemd[1]: Started Mount ZFS filesystems.
-- Reboot --
May 05 23:33:24 nas.brandongolway.us systemd[1]: Starting Mount ZFS filesystems...
May 05 23:33:24 nas.brandongolway.us zfs[3224]: cannot mount '/mnt/safekeeping': directory is not empty
May 05 23:33:24 nas.brandongolway.us zfs[3224]: cannot mount '/mnt/storage': directory is not empty
May 05 23:33:24 nas.brandongolway.us zfs[3224]: cannot mount '/mnt/storage/downloads': directory is not empty
May 05 23:33:24 nas.brandongolway.us zfs[3224]: cannot mount '/mnt/storage/multimedia': directory is not empty
May 05 23:33:24 nas.brandongolway.us systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
May 05 23:33:24 nas.brandongolway.us systemd[1]: Failed to start Mount ZFS filesystems.
May 05 23:33:24 nas.brandongolway.us systemd[1]: zfs-mount.service: Unit entered failed state.
May 05 23:33:24 nas.brandongolway.us systemd[1]: zfs-mount.service: Failed with result 'exit-code'.
...
-- Reboot --
Jun 11 20:12:42 nas.brandongolway.us systemd[1]: Starting Mount ZFS filesystems...
Jun 11 20:12:42 nas.brandongolway.us zfs[4398]: cannot mount '/mnt/downloads': directory is not empty
Jun 11 20:12:43 nas.brandongolway.us zfs[4398]: cannot mount '/mnt/safekeeping': directory is not empty
Jun 11 20:12:43 nas.brandongolway.us zfs[4398]: cannot mount '/mnt/storage': directory is not empty
Jun 11 20:12:43 nas.brandongolway.us zfs[4398]: cannot mount '/mnt/storage/multimedia': directory is not empty
Jun 11 20:12:43 nas.brandongolway.us systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
Jun 11 20:12:43 nas.brandongolway.us systemd[1]: Failed to start Mount ZFS filesystems.
Jun 11 20:12:43 nas.brandongolway.us systemd[1]: zfs-mount.service: Unit entered failed state.
Jun 11 20:12:43 nas.brandongolway.us systemd[1]: zfs-mount.service: Failed with result 'exit-code'.
Deleting /etc/zfs/zpool.cache doesn't fix anything.
Not a Sysadmin issue, moving to NC...
Directory is not empty... is it?
No, the directories aren't empty because the pools are already mounted there. It looks like they are being mounted twice, even though I don't have them in fstab.
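One way to confirm whether a dataset really is being mounted twice is to stack-list the mountpoint (the paths below are taken from the journal above; this is a diagnostic sketch, not output from the poster's machine):

```shell
# List every filesystem stacked on the same mountpoint; more than one
# row here would mean a mount landed on top of an existing mount.
findmnt /mnt/storage

# Cross-check what ZFS believes is mounted against the kernel's view.
zfs mount
grep zfs /proc/mounts
```

If findmnt shows two rows for the same path, the "directory is not empty" errors are just zfs-mount.service trying to mount datasets that something earlier in boot (e.g. the initramfs or zpool import) already mounted.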
No one can help with this??
There is a race condition between your bind mounts and the ZFS mounts, as you noticed correctly.
I have the same problem. You can work around it by setting your ZFS datasets to legacy mounting
and adding them to /etc/fstab like other filesystems. The deeper problem here is that ZFS cannot trigger a mount from within the kernel because vfs_mount() is GPL-licensed,
so it mounts filesystems by calling the userspace mount executable from the kernel. Very hacky stuff.
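The legacy-mount workaround would look roughly like this (the pool/dataset name storage/multimedia is an assumption based on the mountpoints in the journal, and the fstab options are illustrative):

```shell
# Stop ZFS from auto-mounting the dataset; from now on it behaves
# like a classic filesystem that fstab/systemd is in charge of.
zfs set mountpoint=legacy storage/multimedia

# Let systemd order the mount like any other fstab entry.
echo 'storage/multimedia /mnt/storage/multimedia zfs defaults 0 0' >> /etc/fstab
mount /mnt/storage/multimedia
```

Since systemd generates proper .mount units from fstab, the bind mounts can then be ordered after the dataset mount by adding x-systemd.requires=/mnt/storage/multimedia to their own fstab options.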
I think I configured a systemd service for using bind mounts together with ZFS: use a systemd unit for the bind mount, make it depend on the ZFS mount, and set After=zfs-mount.service.
EDIT: Here it is
https://wiki.archlinux.org/index.php/ZFS#Bindmount
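The wiki's approach can be sketched as a native systemd mount unit. Everything below (the /srv/nfs/multimedia target path, the unit name, the Description) is illustrative; only zfs-mount.service comes from this thread:

```shell
# A mount unit's file name must be the Where= path with slashes
# turned into dashes, so /srv/nfs/multimedia -> srv-nfs-multimedia.mount.
cat > /etc/systemd/system/srv-nfs-multimedia.mount <<'EOF'
[Unit]
Description=Bind mount multimedia dataset for NFS export
Requires=zfs-mount.service
After=zfs-mount.service

[Mount]
What=/mnt/storage/multimedia
Where=/srv/nfs/multimedia
Type=none
Options=bind

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now srv-nfs-multimedia.mount
```

Because the unit declares Requires= and After= on zfs-mount.service, systemd will not attempt the bind mount until the ZFS mounts have finished, which removes the race.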
Last edited by teateawhy (2016-06-23 14:25:39)
Yeah, I've been reading the wiki, but I'm still having issues. I just got rid of my bind mounts, which resolved my issues with NFS, and now it doesn't matter if the service fails. I posted an issue on the ZoL GitHub and a dev responded, "It just happens sometimes."
Maybe someone can write a systemd.generator(8) so systemd can learn about those mounts. That would solve all of these problems.