I have a ZFS backup system that hadn't been updated in a long time because it was working perfectly and there was no reason to bother. After a recent upgrade, though, my ZFS datasets are no longer automatically mounted when I reboot. Unfortunately, the services don't report any errors, and I can run
# zfs mount -a
after the system is booted with no problem. Some particulars:
[root@elephant etc]# uname -a
Linux elephant 4.16.11-1-ARCH #1 SMP PREEMPT Tue May 22 21:40:27 UTC 2018 x86_64 GNU/Linux
[root@elephant etc]# pacman -Q | grep spl
spl-dkms 0.7.9-1
spl-utils 0.7.9-1
[root@elephant etc]# pacman -Q | grep zfs
zfs-dkms 0.7.9-1
zfs-utils 0.7.9-1
[root@elephant etc]# zfs get mountpoint backup/www
NAME PROPERTY VALUE SOURCE
backup/www mountpoint /backup/www default
[root@elephant etc]# zfs get mountpoint backup/data
NAME PROPERTY VALUE SOURCE
backup/data mountpoint /backup/data default
[root@elephant etc]# zfs get mountpoint backup/metadata
NAME PROPERTY VALUE SOURCE
backup/metadata mountpoint /backup/metadata default
Notice in particular that the zfs-mount service seems to be perfectly happy:
[root@elephant ~]# systemctl -l status zfs*
● zfs.target - ZFS startup target
Loaded: loaded (/usr/lib/systemd/system/zfs.target; enabled; vendor preset: enabled)
Active: active since Mon 2018-05-28 15:30:18 CDT; 1min 32s ago
May 28 15:30:18 elephant systemd[1]: Reached target ZFS startup target.
● zfs-import-cache.service - Import ZFS pools by cache file
Loaded: loaded (/usr/lib/systemd/system/zfs-import-cache.service; enabled; vendor preset: enabled)
Active: active (exited) since Mon 2018-05-28 15:30:18 CDT; 1min 32s ago
Process: 659 ExecStart=/usr/bin/zpool import -c /etc/zfs/zpool.cache -aN (code=exited, status=0/SUCCESS)
Process: 656 ExecStartPre=/sbin/modprobe zfs (code=exited, status=0/SUCCESS)
Main PID: 659 (code=exited, status=0/SUCCESS)
May 28 15:30:12 elephant systemd[1]: Starting Import ZFS pools by cache file...
May 28 15:30:18 elephant systemd[1]: Started Import ZFS pools by cache file.
● zfs-zed.service - ZFS Event Daemon (zed)
Loaded: loaded (/usr/lib/systemd/system/zfs-zed.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2018-05-28 15:30:18 CDT; 1min 32s ago
Docs: man:zed(8)
Main PID: 1586 (zed)
Tasks: 3 (limit: 4915)
Memory: 5.6M
CGroup: /system.slice/zfs-zed.service
└─1586 /usr/bin/zed -F
May 28 15:30:18 elephant systemd[1]: Started ZFS Event Daemon (zed).
May 28 15:30:18 elephant zed[1586]: ZFS Event Daemon 0.7.9-1 (PID 1586)
May 28 15:30:18 elephant zed[1586]: Processing events since eid=0
May 28 15:30:18 elephant zed[1591]: eid=1 class=history_event pool_guid=0x7314E37F1A1C0088
May 28 15:30:18 elephant zed[1593]: eid=2 class=config_sync pool_guid=0x7314E37F1A1C0088
May 28 15:30:18 elephant zed[1595]: eid=3 class=pool_import pool_guid=0x7314E37F1A1C0088
May 28 15:30:18 elephant zed[1618]: eid=5 class=config_sync pool_guid=0x7314E37F1A1C0088
● zfs-mount.service - Mount ZFS filesystems
Loaded: loaded (/usr/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
Active: active (exited) since Mon 2018-05-28 15:30:12 CDT; 1min 38s ago
Process: 657 ExecStart=/usr/bin/zfs mount -a (code=exited, status=0/SUCCESS)
Main PID: 657 (code=exited, status=0/SUCCESS)
May 28 15:30:12 elephant systemd[1]: Starting Mount ZFS filesystems...
May 28 15:30:12 elephant systemd[1]: Started Mount ZFS filesystems.
[root@elephant ~]#
However:
[root@elephant ~]# zfs mount
[root@elephant ~]#
[root@elephant ~]# zfs mount -a
[root@elephant ~]# zfs mount
backup /backup
backup/data /backup/data
backup/metadata /backup/metadata
backup/www /backup/www
I've run out of time to deal with this and am just going to switch all the datasets over to legacy mounting, but I'm still curious why this worked for zfs 0.6.x but isn't working for 0.7.9 -- possibly some kind of systemd incompatibility?
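For reference, the legacy-mount switch I have in mind looks roughly like this (untested sketch; dataset names are the ones from above):

```shell
# Hand mounting over from the zfs-mount service to the fstab.
zfs set mountpoint=legacy backup/data
zfs set mountpoint=legacy backup/metadata
zfs set mountpoint=legacy backup/www

# Then add corresponding /etc/fstab entries, e.g.:
# backup/data      /backup/data      zfs  defaults  0 0
# backup/metadata  /backup/metadata  zfs  defaults  0 0
# backup/www       /backup/www       zfs  defaults  0 0
```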
Last edited by pgoetz (2018-05-29 15:49:38)
Although I actually found the answer on reddit, this basically resolved my issue: I didn't have zfs-import.target enabled. I'm not even sure this unit file existed when I originally set this system up with ZFS 0.6.x. All it took to fix this system was
# systemctl enable zfs-import.target
I spent hours googling for a solution, but didn't know to include zfs-import.target in the search, which would have yielded answers immediately.
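In case it helps anyone else, these are the units involved in the startup chain (they all appear in the status output above); only zfs-import.target was missing on my system, but enabling the whole set shouldn't hurt:

```shell
# Enable the full ZFS startup chain; zfs-import.target is the one
# that was missing here after the 0.6.x -> 0.7.x upgrade.
systemctl enable zfs-import-cache.service
systemctl enable zfs-import.target
systemctl enable zfs-mount.service
systemctl enable zfs.target
```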
Not sure, but maybe you could try to avoid the issue by creating a systemd drop-in that adds the mount command?
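Something along these lines (untested sketch; the override path and settings are just how I'd attempt it):

```ini
# /etc/systemd/system/zfs-mount.service.d/override.conf
# Created with: systemctl edit zfs-mount.service
[Unit]
# Make sure the mount runs after the pool import has completed.
After=zfs-import.target

[Service]
# Re-run the mount once the service's main command has finished.
ExecStartPost=/usr/bin/zfs mount -a
```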
I thought about this, but none of the service files were showing a failed state, and drop-in modifications are usually only needed when a service fails to start.