I have added a section on creating datasets and setting dataset attributes to the ZFS ArchWiki page. I wouldn't want this to happen to someone else.
1) Make a pool.
2) Make individual datasets if you want to apply snapshots, quotas, etc. to them.
3) Optional, depending on whether you have data in the pool but not in the datasets: move the data into the datasets.
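A sketch of those steps as shell commands (the device names and dataset names here are examples matching this thread, not a prescription; `zpool create` is destructive, so adjust to your own disks):

```shell
# 1) Make a pool (example: a raidz vdev over six disks).
zpool create storage raidz sda sdb sdc sdd sde sdf

# 2) Make individual datasets, so snapshots/quotas can be applied per-dataset.
zfs create storage/recordings
zfs create storage/music

# 3) If data already lives in the pool's root directory but not in a
#    dataset, move it into the newly created dataset.
mv /media/storage/some-old-dir/* /media/storage/recordings/
```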
I do not understand why you were unable to mount the pool after you rebooted.
zfs destroy storage/recordings
I'll keep looking into this, as your advice did work in theory. I have something to go on now.
EDIT: After a reboot, the ZFS pool will no longer mount.
[server@server ~]$ sudo systemctl status zfs.service
zfs.service - Zettabyte File System (ZFS)
Loaded: loaded (/etc/systemd/system/zfs.service; enabled)
Active: failed (Result: exit-code) since Tue 2013-10-29 08:38:34 CET; 8min ago
Docs: man:zfs(8)
man:zpool(8)
Process: 9764 ExecStart=/usr/bin/zfs mount -a (code=exited, status=1/FAILURE)
Process: 9761 ExecStart=/usr/bin/zpool import -c /etc/zfs/zpool.cache -aN (code=exited, status=0/SUCCESS)
Process: 9759 ExecStart=/sbin/modprobe zfs (code=exited, status=0/SUCCESS)
Main PID: 9764 (code=exited, status=1/FAILURE)
Oct 29 08:38:33 server zpool[9761]: no pools available to import
Oct 29 08:38:34 server zfs[9764]: cannot mount '/media/storage/music': directory is not empty
Oct 29 08:38:34 server zfs[9764]: cannot mount '/media/storage/pictures': directory is not empty
Oct 29 08:38:34 server zfs[9764]: cannot mount '/media/storage/recordings': directory is not empty
Oct 29 08:38:34 server zfs[9764]: cannot mount '/media/storage/unshared': directory is not empty
Oct 29 08:38:34 server zfs[9764]: cannot mount '/media/storage/users': directory is not empty
Oct 29 08:38:34 server zfs[9764]: cannot mount '/media/storage/videos': directory is not empty
Oct 29 08:38:34 server systemd[1]: zfs.service: main process exited, code=exited, status=1/FAILURE
Oct 29 08:38:34 server systemd[1]: Failed to start Zettabyte File System (ZFS).
Oct 29 08:38:34 server systemd[1]: Unit zfs.service entered failed state.
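The "directory is not empty" errors mean files already exist inside the mountpoint directories, and `zfs mount` refuses to mount over them. One way to find the offending files is to scan each subdirectory of the mount root; the sketch below demonstrates the check on a throwaway temp directory (substitute /media/storage to run it against the real mountpoints):

```shell
# Flag any subdirectory that already contains entries -- these are what
# block "zfs mount -a". Demonstrated on a temp tree; point "base" at
# /media/storage for the real check.
base=$(mktemp -d)
mkdir -p "$base/music" "$base/videos"
touch "$base/music/leftover.mp3"   # simulate a stray file

for d in "$base"/*/; do
  if [ -n "$(find "$d" -mindepth 1 -print -quit)" ]; then
    echo "not empty: $d"
  fi
done
```

Once the stray files are moved out (or into the datasets where they belong), `zfs mount -a` should succeed.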
# zfs create storage/recordings
Now you can apply a quota to it (I think).
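A sketch of applying the quota once the dataset exists — note the quota is set on the dataset name, not on the /media/storage/... path (3T is the value from the original question):

```shell
zfs create storage/recordings
zfs set quota=3T storage/recordings
zfs get quota storage/recordings   # verify the property took effect
```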
I've tried to google this one, but I'm coming up empty. I recently set up a ZFS raidz pool with six drives, and so far I'm happy and it's working fine:
[root@server server]# zpool status
pool: storage
state: ONLINE
scan: scrub repaired 43.5M in 17h40m with 0 errors on Thu Oct 24 18:18:55 2013
config:
        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sda1    ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
errors: No known data errors
The pool "storage" is mounted on /media/storage
Enter MythTV, which offers no internal method to limit the amount of disk space it will use, other than the physical drive capacity or a quota on the recordings directory.
I ran across a few ZFS sites which explain the quota function built into ZFS. It is supposed to allow a quota on directories within the raidz array. Alas, I get the following:
[root@server server]# zfs set quota=3T storage/recordings
cannot open 'storage/recordings': dataset does not exist
and
[root@server server]# zfs set quota=3T storage/media/storage/recordings
cannot open 'storage/media/storage/recordings': dataset does not exist
I have been relying mostly on this site:
http://www.tech-recipes.com/rx/1404/zfs … tem-quota/
So my question would be: Does anyone know how to make this work?
Otherwise there is the generic Linux quota system that uses fstab, but I'm not sure whether that will work on a directory inside a ZFS pool. Also, fstab is not used to mount a ZFS pool.
Thank you!