I've followed the tutorial for installing ZFS on Arch Linux as the root filesystem, but I still have a problem.
I used this one:
https://wiki.archlinux.org/index.php/In … nux_on_ZFS
When I boot my computer I get dropped to an emergency shell and have to import/export my pool by hand.
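Concretely, to get the machine to boot I have to type something like this at that prompt (zroot being my pool; sometimes an export first, sometimes the -f is needed):

zpool import -f zroot
exit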
I've already run:
systemctl enable zfs.service
Here is my /usr/lib/systemd/system/zfs.service:
[Unit]
Description=Zettabyte File System (ZFS)
Documentation=man:zfs(8) man:zpool(8)
DefaultDependencies=no
After=cryptsetup.target
Before=local-fs.target
Conflicts=shutdown.target umount.target
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/modprobe zfs
#ExecStart=/usr/bin/zpool import -c /etc/zfs/zpool.cache -aN
#ExecStart=/usr/bin/zfs mount -a
#ExecStart=/usr/bin/zfs share -a
ExecStop=/usr/bin/swapoff -a
ExecStop=/usr/bin/zfs umount -a
ExecStop=/usr/bin/zpool export zroot
[Install]
WantedBy=local-fs.target
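(Side note: the commented-out import line above expects /etc/zfs/zpool.cache. If I understand correctly, that cache file can be (re)generated with something like the command below, though I don't know whether it matters here:)

zpool set cachefile=/etc/zfs/zpool.cache zroot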
And here is the output of journalctl --unit=zfs:
-- Logs begin at Sun 2013-09-15 08:49:49 CEST, end at Mon 2013-09-16 09:53:19 CEST. --
Sep 15 08:58:21 Kusanagi systemd[1]: Started Zettabyte File System (ZFS).
-- Reboot --
Sep 15 09:01:18 Kusanagi systemd[1]: Started Zettabyte File System (ZFS).
-- Reboot --
Sep 15 09:04:44 Kusanagi systemd[1]: Started Zettabyte File System (ZFS).
Sep 15 09:06:09 Kusanagi systemd[1]: Stopping Zettabyte File System (ZFS)...
-- Reboot --
Sep 15 09:08:28 Kusanagi systemd[1]: zfs.service: main process exited, code=exited, status=1/FAILURE
Sep 15 09:08:28 Kusanagi systemd[1]: Failed to start Zettabyte File System (ZFS).
Sep 15 09:08:28 Kusanagi systemd[1]: Unit zfs.service entered failed state.
-- Reboot --
Sep 15 09:33:54 Kusanagi zpool[416]: failed to open cache file: No such file or directory
Sep 15 09:33:54 Kusanagi systemd[1]: zfs.service: main process exited, code=exited, status=1/FAILURE
Sep 15 09:33:54 Kusanagi systemd[1]: Failed to start Zettabyte File System (ZFS).
Sep 15 09:33:54 Kusanagi systemd[1]: Unit zfs.service entered failed state.
-- Reboot --
Sep 15 10:15:54 Kusanagi systemd[1]: [/usr/lib/systemd/system/zfs.service:18] Missing '='.
Sep 15 10:18:34 Kusanagi systemd[1]: Cannot add dependency job for unit zfs.service, ignoring: Unit zfs.service failed to load: Bad message. See system logs and 'systemctl status zfs.service' for details.
Sep 15 10:32:12 Kusanagi systemd[1]: [/usr/lib/systemd/system/zfs.service:18] Missing '='.
Sep 15 10:32:18 Kusanagi systemd[1]: [/usr/lib/systemd/system/zfs.service:18] Missing '='.
Sep 15 10:37:06 Kusanagi systemd[1]: Cannot add dependency job for unit zfs.service, ignoring: Unit zfs.service failed to load: Bad message. See system logs and 'systemctl status zfs.service' for details.
Sep 15 10:48:19 Kusanagi systemd[1]: Cannot add dependency job for unit zfs.service, ignoring: Unit zfs.service failed to load: Bad message. See system logs and 'systemctl status zfs.service' for details.
Sep 15 10:49:12 Kusanagi systemd[1]: Cannot add dependency job for unit zfs.service, ignoring: Unit zfs.service failed to load: Bad message. See system logs and 'systemctl status zfs.service' for details.
Sep 15 10:49:33 Kusanagi systemd[1]: Cannot add dependency job for unit zfs.service, ignoring: Unit zfs.service failed to load: Bad message. See system logs and 'systemctl status zfs.service' for details.
Sep 15 10:49:54 Kusanagi systemd[1]: Cannot add dependency job for unit zfs.service, ignoring: Unit zfs.service failed to load: Bad message. See system logs and 'systemctl status zfs.service' for details.
Sep 15 10:50:22 Kusanagi systemd[1]: Cannot add dependency job for unit zfs.service, ignoring: Unit zfs.service failed to load: Bad message. See system logs and 'systemctl status zfs.service' for details.
Sep 15 10:51:16 Kusanagi systemd[1]: Cannot add dependency job for unit zfs.service, ignoring: Unit zfs.service failed to load: Bad message. See system logs and 'systemctl status zfs.service' for details.
Sep 15 10:51:44 Kusanagi systemd[1]: Cannot add dependency job for unit zfs.service, ignoring: Unit zfs.service failed to load: Bad message. See system logs and 'systemctl status zfs.service' for details.
Sep 15 10:51:53 Kusanagi systemd[1]: Cannot add dependency job for unit zfs.service, ignoring: Unit zfs.service failed to load: Bad message. See system logs and 'systemctl status zfs.service' for details.
Sep 15 10:53:38 Kusanagi systemd[1]: Cannot add dependency job for unit zfs.service, ignoring: Unit zfs.service failed to load: Bad message. See system logs and 'systemctl status zfs.service' for details.
Sep 15 10:53:44 Kusanagi systemd[1]: Cannot add dependency job for unit zfs.service, ignoring: Unit zfs.service failed to load: Bad message. See system logs and 'systemctl status zfs.service' for details.
Sep 15 10:56:40 Kusanagi systemd[1]: Cannot add dependency job for unit zfs.service, ignoring: Unit zfs.service failed to load: Bad message. See system logs and 'systemctl status zfs.service' for details.
Sep 15 11:00:45 Kusanagi systemd[1]: Cannot add dependency job for unit zfs.service, ignoring: Unit zfs.service failed to load: Bad message. See system logs and 'systemctl status zfs.service' for details.
Sep 15 11:05:38 Kusanagi systemd[1]: Cannot add dependency job for unit zfs.service, ignoring: Unit zfs.service failed to load: Bad message. See system logs and 'systemctl status zfs.service' for details.
Sep 15 11:12:32 Kusanagi systemd[1]: Cannot add dependency job for unit zfs.service, ignoring: Unit zfs.service failed to load: Bad message. See system logs and 'systemctl status zfs.service' for details.
-- Reboot --
Sep 15 22:05:17 Kusanagi systemd[1]: Started Zettabyte File System (ZFS).
-- Reboot --
Sep 16 05:55:32 Kusanagi systemd[1]: Started Zettabyte File System (ZFS).
-- Reboot --
Sep 16 07:33:39 Kusanagi systemd[1]: Started Zettabyte File System (ZFS).
And here is systemctl status zfs.service:
systemctl status zfs.service
zfs.service - Zettabyte File System (ZFS)
Loaded: loaded (/usr/lib/systemd/system/zfs.service; enabled)
Active: active (exited) since Mon 2013-09-16 07:33:39 CEST; 2h 47min ago
Docs: man:zfs(8)
man:zpool(8)
Process: 405 ExecStart=/sbin/modprobe zfs (code=exited, status=0/SUCCESS)
Sep 16 07:33:39 Kusanagi systemd[1]: Started Zettabyte File System (ZFS).
Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
And I've already done a # systemctl daemon-reload.
Any ideas?
regards
Bussiere
Offline
When pasting configs, code or command output, please use [ code ] tags https://bbs.archlinux.org/help.php#bbcode
like this
It makes the code more readable and - in case of longer listings - more convenient to scroll through.
Offline
Also, in addition to karol's always great advice, you should search before posting. This question has come up a couple of times in the past month or so, and was actually asked by someone else just two days ago… maybe even yesterday.
Offline
1st post on the forums. But I've spent *years* lurking. :-)
Maybe I looked at a different thread than the one WonderWoofy was referring to, but I did run into the same problem myself this week while setting up a new Arch install (zroot/ and zroot/home on ZFS). I believe the systemd init scripts for starting up and shutting down ZFS, along with the kernel parameters (zfs=zroot), are not all correct in the tutorial. For me, a stop-gap solution was to append "zfs_force=1" to the kernel command line issued from GRUB (updated in /boot/grub/grub.cfg).
This is not the most elegant solution, but it works.
$ cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-linux zfs=zroot zfs_force=1
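In case it helps, the same parameters can presumably be made persistent via /etc/default/grub instead of editing the generated grub.cfg by hand; something along these lines (untested on my side, adjust the pool name if yours differs), followed by regenerating the config:

GRUB_CMDLINE_LINUX="zfs=zroot zfs_force=1"
grub-mkconfig -o /boot/grub/grub.cfg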
Hopefully the correct umount+export, reboot, import+mount sequence will be in our future.
Offline
I had this problem too in the beginning, but solved it by setting the hostid like this:
~# hostid > /etc/hostid
After this I rebuilt my initrd with
~# mkinitcpio -p linux
After these two steps I could remove zfs_force from my kernel line, and it has worked without a problem ever since.
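(This assumes the zfs hook from the wiki is already in /etc/mkinitcpio.conf, before the filesystems hook; your exact hook list may differ, but roughly:)

HOOKS="base udev autodetect modconf block keyboard zfs filesystems"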
My System: Dell XPS 13 | i7-7560U | 16GB RAM | 512GB SSD | FHD Screen | Arch Linux
My Workstation/Server: Supermicro X11SSZ-F | Xeon E3-1245 v6 | 64GB RAM | 1TB SSD Raid 1 + 6TB HDD ZFS Raid Z1 | Proxmox VE
My Stuff at Github: github
My Homepage: Seiichiros HP
Offline
I had this problem too in the beginning, but solved it by setting the hostid like this:
~# hostid > /etc/hostid
After this I rebuilt my initrd with
~# mkinitcpio -p linux
After these two steps I could remove zfs_force from my kernel line, and it has worked without a problem ever since.
This is the correct solution. If you run ZFS on your root filesystem, the hostid won't be available at the time the pool is mounted. Specifying the hostid as mentioned above, or passing it as a kernel parameter (see the example below), is the proper fix; there is no need to edit the zfs.service file.
I have updated the wiki article on installing Arch on ZFS to help clear up this confusion.
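For the kernel-parameter variant, the SPL module accepts the hostid on the command line; it looks roughly like this, with the value replaced by the output of hostid on your own machine (the one below is only an example):

spl.spl_hostid=0x007f0101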
Offline
I had this problem too in the beginning, but solved it by setting the hostid like this:
~# hostid > /etc/hostid
After this I rebuilt my initrd with
~# mkinitcpio -p linux
After these two steps I could remove zfs_force from my kernel line, and it has worked without a problem ever since.
This solution messed up my installation, and now it is a real pain to reinstall my system because there is a version problem with the ZFS module:
https://bbs.archlinux.org/viewtopic.php … 0#p1331360
And it may also have messed up my hard drive.
Bussiere
Offline
I'm sorry to hear that, but I can only say that this approach worked for me on two systems without a problem. Maybe the broken modules were already present when you did the mkinitcpio rebuild after setting the hostid? I don't have a clue how setting the hostid could kill the system; at worst it should still be able to boot after re-adding zfs_force to the kernel line.
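(For the record, a quick one-off way to re-add it, assuming GRUB: press e on the boot entry at the menu and append zfs_force=1 to the linux line, so that it reads something like this, then boot:)

linux /vmlinuz-linux zfs=zroot zfs_force=1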
My System: Dell XPS 13 | i7-7560U | 16GB RAM | 512GB SSD | FHD Screen | Arch Linux
My Workstation/Server: Supermicro X11SSZ-F | Xeon E3-1245 v6 | 64GB RAM | 1TB SSD Raid 1 + 6TB HDD ZFS Raid Z1 | Proxmox VE
My Stuff at Github: github
My Homepage: Seiichiros HP
Offline
I'm sorry to hear that, but I can only say that this approach worked for me on two systems without a problem. Maybe the broken modules were already present when you did the mkinitcpio rebuild after setting the hostid? I don't have a clue how setting the hostid could kill the system; at worst it should still be able to boot after re-adding zfs_force to the kernel line.
The pool is also corrupted and doesn't want to mount.
But that's the game, and I've made backups with snapshots on an external device, so just reinstalling will be fine.
But I'm having some problems just installing ZFS on a basic installation.
Btw, it proves to me that ZFS is great for backups, and that making backups often is mandatory.
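For reference, the snapshot-based backups were roughly this kind of thing (the snapshot and target names below are only examples; the receive side can be another pool or a file on the external device):

zfs snapshot -r zroot@backup-20130916
zfs send -R zroot@backup-20130916 | zfs receive -F backuppool/zroot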
regards
Offline