
#1 2016-06-09 16:49:56

predmijat
Member
Registered: 2014-09-30
Posts: 39

[SOLVED] ZFS doesn't autostart

Hi,

I've updated the system today - linux, linux-headers and zfs stuff:

local/linux 4.6.1-2 (base)
local/linux-headers 4.6.1-2
local/spl-linux-git 0.6.5_r62_g16fc1ec_4.6.1_2-1 (archzfs-linux-git)
local/spl-utils-linux-git 0.6.5_r62_g16fc1ec_4.6.1_2-1 (archzfs-linux-git)
local/zfs-linux-git 0.6.5_r304_gf74b821_4.6.1_2-1 (archzfs-linux-git)
local/zfs-utils-linux-git 0.6.5_r304_gf74b821_4.6.1_2-1 (archzfs-linux-git)

After reboot:

# systemctl status zfs.target
● zfs.target - ZFS startup target
Loaded: loaded (/usr/lib/systemd/system/zfs.target; enabled; vendor preset: enabled)
Active: active since Thu 2016-06-09 11:05:00 CEST; 7min ago

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.

# zpool status
no pools available

I have to manually run "systemctl start zfs-import-cache.service" and after that "zfs mount -a". I then tried to recreate the cachefile, but after a reboot I still have to run the two commands I mentioned...

I've checked my backups, and zfs-import-cache.service is different from the version I had before the update. I'm guessing that's the reason. Is anyone else experiencing this? What is the proper way to fix it?
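
For reference, this is roughly what I have to run by hand after every boot, plus the cachefile recreation I tried (my pool is named "storage"; the set-cachefile line is just the standard way of doing it, as far as I know):

# systemctl start zfs-import-cache.service
# zfs mount -a
# zpool set cachefile=/etc/zfs/zpool.cache storage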

Thanks!

Last edited by predmijat (2016-06-10 10:34:32)

Offline

#2 2016-06-09 16:56:23

Slithery
Administrator
From: Norfolk, UK
Registered: 2013-12-01
Posts: 5,776

Re: [SOLVED] ZFS doesn't autostart


No, it didn't "fix" anything. It just shifted the brokeness one space to the right. - jasonwryan
Closing -- for deletion; Banning -- for muppetry. - jasonwryan

aur - dotfiles

Offline

#3 2016-06-09 17:51:47

drozdu
Member
Registered: 2012-02-20
Posts: 13

Re: [SOLVED] ZFS doesn't autostart

I had exactly the same.
Solved by moving from zfs-linux-git to zfs-linux (remember to remove zfs-linux-git, zfs-utils-linux-git, spl-linux-git and spl-utils-linux-git prior to installing zfs-linux) and importing the pools.
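
Something along these lines (a rough sketch; the exact non-git package names are my guess based on the -git names above, so check the archzfs repo first):

# pacman -Rns zfs-linux-git zfs-utils-linux-git spl-linux-git spl-utils-linux-git
# pacman -S zfs-linux zfs-utils-linux spl-linux spl-utils-linux
# zpool import <pool>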

Offline

#4 2016-06-10 07:20:32

predmijat
Member
Registered: 2014-09-30
Posts: 39

Re: [SOLVED] ZFS doesn't autostart

Nope, didn't help.

drozdu wrote:

I had exactly the same.
Solved by moving from zfs-linux-git to zfs-linux (remember to remove zfs-linux-git, zfs-utils-linux-git, spl-linux-git and spl-utils-linux-git prior to installing zfs-linux) and importing the pools.

I'd like to know if there's a proper solution for this one, but I'll test your solution too, thanks :)

Moving to zfs-linux didn't help either. Something else is bugging me, and I can't figure out what...
I removed zfs-linux-git (along with utils, spl...), installed zfs-linux (along with utils, spl...), did a "zpool import storage", created a fresh cache file, rebooted, and ZFS still doesn't start.

Now what? :(

edit: moving to zfs-linux disabled zfs.target. :s After enabling it again, ZFS starts on boot.
I'll mark it as solved, even though zfs-linux-git probably isn't working yet. :)
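
In case anyone else hits the same thing, the re-enable step was simply:

# systemctl enable zfs.target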

Last edited by predmijat (2016-06-10 10:34:17)

Offline

#5 2016-06-10 17:34:20

wolfdogg
Member
From: Portland, OR, USA
Registered: 2011-05-21
Posts: 545

Re: [SOLVED] ZFS doesn't autostart

I'm having this issue too; I just didn't want to be the guy who posted yet another ZFS issue (initially).

I don't know if anybody else experienced this, but I first tried to update 3 days ago and it was broken (zfs errored out and I couldn't finish my update), so I had to hold off. I tried again yesterday and zfs updated without a hitch, and then I ran into the issue in this topic.

I removed the 4 old zfs packages, switched to zfs-linux, rebooted, disabled/re-enabled the target just in case, rebooted again, and it found and imported my pool with no problems.

That indeed solves this topic!

Now I just need to understand this message better:

$ zpool status
  pool: san
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: scrub repaired 0 in 7h30m with 0 errors on Sat Jun  4 03:00:58 2016
Aborted (core dumped)

If anyone makes a post on that, please link it here :-)

Last edited by wolfdogg (2016-06-10 17:37:59)


Node.js, PHP Software Architect and Engineer (Full-Stack/DevOps)
GitHub  | LinkedIn

Offline

#6 2016-06-10 19:03:38

drozdu
Member
Registered: 2012-02-20
Posts: 13

Re: [SOLVED] ZFS doesn't autostart

wolfdogg wrote:

Now I just need to understand this message better:

$ zpool status
  pool: san
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: scrub repaired 0 in 7h30m with 0 errors on Sat Jun  4 03:00:58 2016
Aborted (core dumped)

About "Aborted (core dumped)": see my post

About "Some supported features ...": you have upgraded ZFS packages, but your pool is still in older version, that is does not support all features (use 'zpool get all <pool> |grep feature@' to see them), so you should perform 'zpool upgrade <pool>', but you will not be able to downgrade ZFS after that.

Offline

#7 2016-06-10 21:32:16

wolfdogg
Member
From: Portland, OR, USA
Registered: 2011-05-21
Posts: 545

Re: [SOLVED] ZFS doesn't autostart

I'm trying to wrap my head around this. I do see what appears to be the invalid naming you mentioned on the zpools as well. And I do 'only' see the word

Aborted

at the end when running it with sudo, which I did at first:

$ sudo zpool status
  pool: san
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: scrub repaired 0 in 7h30m with 0 errors on Sat Jun  4 03:00:58 2016
Aborted

When run as root, I do see a bit more detail:

# zpool status
  pool: san
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: scrub repaired 0 in 7h30m with 0 errors on Sat Jun  4 03:00:58 2016
Aborted (core dumped)
# zpool status -g
  pool: san
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
        still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(5) for details.
  scan: scrub repaired 0 in 7h30m with 0 errors on Sat Jun  4 03:00:58 2016
config:

        NAME                    STATE     READ WRITE CKSUM
        san                     ONLINE       0     0     0
          2919623713328565230   ONLINE       0     0     0
          11804841023597111856  ONLINE       0     0     0
          3127014554633437690   ONLINE       0     0     0

errors: No known data errors
# lsblk -f
NAME   FSTYPE     LABEL      UUID                                 MOUNTPOINT
sda
├─sda1
├─sda2 swap                  66431d3d-2445-4a57-8852-788663a1f87d [SWAP]
├─sda3 ext3       root       1a0461bd-92d6-4230-b3e1-7cea37d58683 /
├─sda4 ext3       home       57b2807e-852f-4600-b239-598a1ec98806 /home
└─sda5 ext2       gpt_backup 47cbe2f2-25cd-4a5e-835a-21ad137736cc
sdb
├─sdb1 zfs_member san        11416166127217488013
└─sdb9
sdc    zfs_member pool       2882362132597796888
├─sdc1 zfs_member san        11416166127217488013
└─sdc9 zfs_member pool       2882362132597796888
sdd
├─sdd1 zfs_member san        11416166127217488013
└─sdd9

OK, so that's being caused because the pool doesn't have access to these new features? And once it's upgraded with

zpool upgrade

I can't access it from other systems that use older ZFS technology? I.e. if the drives were moved to a new or reinstalled architecture, or what? I'm having a hard time coming up with a scenario where some "software" I use wouldn't support this feature (which features, specifically?).

If I understand that right, and given that I only use 64-bit Arch, the only systems that access this data do so either through a Windows network share via Samba on the system that has the ZFS installation, or on the localhost itself. So, correct me if I'm wrong, but there should be no other "software" I need to worry about in this case, right?

I want to just do the upgrade, but I surely don't want to have to recreate the datasets and repopulate them.

Last edited by wolfdogg (2016-06-10 21:43:43)


Node.js, PHP Software Architect and Engineer (Full-Stack/DevOps)
GitHub  | LinkedIn

Offline

#8 2016-06-12 16:17:36

Baba Tong
Member
Registered: 2013-06-22
Posts: 12

Re: [SOLVED] ZFS doesn't autostart

To come back to the original issue in the OP: I had the exact same problem. It looks like the systemd files may have changed the way the kernel module is loaded.

For me it was a simple

# systemctl enable zfs-import-cache.service

that fixed it. (Should've been common sense really, since this is what OP did manually after every boot...). I'll add a comment about this to the wiki.
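
A quick sanity check afterwards, before rebooting (nothing more than that):

# systemctl is-enabled zfs.target zfs-import-cache.service zfs-mount.service
# systemctl list-unit-files | grep zfs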

Last edited by Baba Tong (2016-06-12 16:18:31)

Offline

#9 2016-10-27 23:35:05

wolfdogg
Member
From: Portland, OR, USA
Registered: 2011-05-21
Posts: 545

Re: [SOLVED] ZFS doesn't autostart

Had that issue once again: the pools weren't autostarting. The same fix is still working nicely:

# systemctl enable zfs-import-cache.service && reboot
wolfdogg@falcon ~$ systemctl status zfs.target
● zfs.target - ZFS startup target
   Loaded: loaded (/usr/lib/systemd/system/zfs.target; enabled; vendor preset: enabled)
   Active: active since Thu 2016-10-27 16:31:47 PDT; 1min 2s ago

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
wolfdogg@falcon ~$ systemctl status zfs-import-cache
● zfs-import-cache.service - Import ZFS pools by cache file
   Loaded: loaded (/usr/lib/systemd/system/zfs-import-cache.service; enabled; vendor preset: enabled)
   Active: active (exited) since Thu 2016-10-27 16:32:01 PDT; 1min 5s ago
  Process: 314 ExecStart=/usr/bin/zpool import -c /etc/zfs/zpool.cache -aN (code=exited, status=0/SUCCESS)
  Process: 311 ExecStartPre=/sbin/modprobe zfs (code=exited, status=0/SUCCESS)
 Main PID: 314 (code=exited, status=0/SUCCESS)
    Tasks: 0 (limit: 4915)
   CGroup: /system.slice/zfs-import-cache.service

Oct 27 16:31:56 falcon systemd[1]: Starting Import ZFS pools by cache file...
Oct 27 16:32:01 falcon systemd[1]: Started Import ZFS pools by cache file.

The wiki is kind of painful; I just come here now. Maybe the wiki pages could be split up or decluttered.

Last edited by wolfdogg (2016-11-24 21:41:06)


Node.js, PHP Software Architect and Engineer (Full-Stack/DevOps)
GitHub  | LinkedIn

Offline

#10 2016-11-07 00:02:13

jskier
Member
From: Minnesota, USA
Registered: 2003-07-30
Posts: 383
Website

Re: [SOLVED] ZFS doesn't autostart

I still can't get this to autostart; I need to import every time. zfs-import-cache doesn't seem to work either.


--
JSkier

Offline

#11 2016-11-07 21:12:47

wolfdogg
Member
From: Portland, OR, USA
Registered: 2011-05-21
Posts: 545

Re: [SOLVED] ZFS doesn't autostart

Me neither now; I'll have to go through it again. Either the last update or the fact that I just rebuilt my array has it not working on boot again. The pool shows as loaded in the status after a reboot, but the data folder isn't mounting, and the only way to get it back is a zpool export followed by a zpool import.
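
For anyone following along, the manual workaround is just (pool name 'san' as in my earlier posts):

# zpool export san
# zpool import san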

Last edited by wolfdogg (2016-11-07 22:09:46)


Node.js, PHP Software Architect and Engineer (Full-Stack/DevOps)
GitHub  | LinkedIn

Offline

#12 2016-11-08 18:31:34

Koopa
Member
Registered: 2012-07-20
Posts: 19

Re: [SOLVED] ZFS doesn't autostart

What other zfs related services do you have enabled? Did you see this? https://github.com/archzfs/archzfs/issues/72

Offline

#13 2016-11-23 17:34:34

pgoetz
Member
From: Austin, Texas
Registered: 2014-02-21
Posts: 341

Re: [SOLVED] ZFS doesn't autostart

I figured out what my problem was -- see my follow-up comment below.

Working with an up-to-date Arch install and zfs-dkms 0.6.5.8-2, I have not been able to get pools imported automatically on reboot. Reading through how the systemd service files changed for version 0.6.5.8, as described here, I manually enabled zfs-import-cache and zfs-mount:

systemctl enable zfs-import-cache
systemctl enable zfs-mount

However, this didn't work, and the zfs-import-cache service fails to start on boot:

[root@elephant ~]# systemctl -l status zfs-import-cache 
● zfs-import-cache.service - Import ZFS pools by cache file
   Loaded: loaded (/usr/lib/systemd/system/zfs-import-cache.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Wed 2016-11-23 01:35:32 CST; 4h 59min ago
  Process: 1618 ExecStart=/usr/bin/zpool import -c /etc/zfs/zpool.cache -aN (code=exited, status=1/FAILURE)
  Process: 1616 ExecStartPre=/sbin/modprobe zfs (code=exited, status=0/SUCCESS)
 Main PID: 1618 (code=exited, status=1/FAILURE)

Nov 23 01:35:27 elephant systemd[1]: Starting Import ZFS pools by cache file...
Nov 23 01:35:32 elephant zpool[1618]: cannot import 'backup': one or more devices is currently unavailable
Nov 23 01:35:32 elephant systemd[1]: zfs-import-cache.service: Main process exited, code=exited, status=1/FAILURE
Nov 23 01:35:32 elephant systemd[1]: Failed to start Import ZFS pools by cache file.
Nov 23 01:35:32 elephant systemd[1]: zfs-import-cache.service: Unit entered failed state.
Nov 23 01:35:32 elephant systemd[1]: zfs-import-cache.service: Failed with result 'exit-code'.

The cryptic systemd error log message is a bit frustrating, since it's not at all clear why the service fails to start.  Presumably it's some kind of timing problem; e.g. in the past this has been caused by trying to load the service before the zfs target is available.
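
The boot-time journal for the unit and its ordering dependencies are probably the places to look (general debugging steps, not a fix):

# journalctl -b -u zfs-import-cache.service
# systemctl cat zfs-import-cache.service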

Importing the pool by hand always works:

[root@elephant ~]# ls /backup
[root@elephant ~]#
[root@elephant ~]# zpool import backup
[root@elephant ~]# ls /backup
Documents  Multimedia
[root@elephant ~]#

Finally, after a manual start of zfs-import-cache.service, I think all the zfs services I need are enabled:

[root@elephant ~]# systemctl list-unit-files | grep zfs
zfs-import-cache.service                                               enabled  
zfs-import-scan.service                                                disabled 
zfs-mount.service                                                      enabled  
zfs-share.service                                                      disabled 
zfs-zed.service                                                        disabled 
zfs.target                                                             enabled  

Last edited by pgoetz (2016-11-27 15:03:38)

Offline

#14 2016-11-24 21:48:24

wolfdogg
Member
From: Portland, OR, USA
Registered: 2011-05-21
Posts: 545

Re: [SOLVED] ZFS doesn't autostart

Koopa wrote:

What other zfs related services do you have enabled? Did you see this? https://github.com/archzfs/archzfs/issues/72

Thank you, I hadn't seen that. I forgot to check for zfs-zed, but as far as I know I had never manually added that.

$ sudo systemctl status zfs-mount
● zfs-mount.service - Mount ZFS filesystems
   Loaded: loaded (/usr/lib/systemd/system/zfs-mount.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
$ sudo systemctl enable zfs-mount
Created symlink /etc/systemd/system/zfs-share.service.wants/zfs-mount.service → /usr/lib/systemd/system/zfs-mount.service.
Created symlink /etc/systemd/system/zfs.target.wants/zfs-mount.service → /usr/lib/systemd/system/zfs-mount.service.
$ sudo systemctl status zfs-mount
● zfs-mount.service - Mount ZFS filesystems
   Loaded: loaded (/usr/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
   Active: inactive (dead)
$ sudo systemctl status zfs-share
● zfs-share.service - ZFS file system shares
   Loaded: loaded (/usr/lib/systemd/system/zfs-share.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
$ sudo systemctl enable zfs-share
Created symlink /etc/systemd/system/zfs.target.wants/zfs-share.service → /usr/lib/systemd/system/zfs-share.service.
$ sudo systemctl enable zfs-zed
Created symlink /etc/systemd/system/zed.service → /usr/lib/systemd/system/zfs-zed.service.
Created symlink /etc/systemd/system/zfs.target.wants/zfs-zed.service → /usr/lib/systemd/system/zfs-zed.service.
$ sudo systemctl status zfs-zed
● zfs-zed.service - ZFS Event Daemon (zed)
   Loaded: loaded (/usr/lib/systemd/system/zfs-zed.service; enabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:zed(8)

Rebooting now:
And we have ZFS! Solved for me!

Last edited by wolfdogg (2016-11-24 21:56:03)


Node.js, PHP Software Architect and Engineer (Full-Stack/DevOps)
GitHub  | LinkedIn

Offline

#15 2016-11-27 15:12:54

pgoetz
Member
From: Austin, Texas
Registered: 2014-02-21
Posts: 341

Re: [SOLVED] ZFS doesn't autostart

I was having problems with getting the zpool to mount automatically on boot.  zfs-import-cache.service was enabled, but was failing on boot because the command:

/usr/bin/zpool import -c /etc/zfs/zpool.cache -aN

was failing.  The problem was that I had created the zpool using the device names:

  # zpool create -f -o ashift=12  backup \
                         raidz2 /dev/sd[a-h] \
                         raidz2 /dev/sd[i-p] \
                         raidz2 /dev/sd[q-x]

Deleting the zpool and recreating it using disk IDs (/dev/disk/by-id/ata*) resolved the issue.
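
In other words, something like this at pool-creation time (the by-id names below are placeholders, not my actual disks):

  # zpool create -f -o ashift=12 backup \
                 raidz2 /dev/disk/by-id/ata-disk01 /dev/disk/by-id/ata-disk02 \
                        /dev/disk/by-id/ata-disk03 /dev/disk/by-id/ata-disk04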

BTW, it turns out that you can change from device names to IDs by exporting and re-importing the zpool:

  # zpool export backup
  # zpool import -d /dev/disk/by-id backup

This will switch all /dev/sdx drives to the full ID.

Offline

#16 2016-11-28 21:25:50

wolfdogg
Member
From: Portland, OR, USA
Registered: 2011-05-21
Posts: 545

Re: [SOLVED] ZFS doesn't autostart

pgoetz wrote:

BTW, it turns out that you can change from device names to IDs by exporting and re-importing the zpool:

  # zpool export backup
  # zpool import -d /dev/disk/by-id backup

This will switch all /dev/sdx drives to the full ID.

Very good to know, I didn't know that. I swear by device-by-label for ZFS; it makes the disks easier to identify and keep track of. It's a bit more painful, but by-id is another proper way. Using plain device letters/names will often get you into big trouble after a reboot; it's never worth it IMO.

Last edited by wolfdogg (2016-11-28 21:29:48)


Node.js, PHP Software Architect and Engineer (Full-Stack/DevOps)
GitHub  | LinkedIn

Offline

#17 2016-11-29 11:12:59

pgoetz
Member
From: Austin, Texas
Registered: 2014-02-21
Posts: 341

Re: [SOLVED] ZFS doesn't autostart

wolfdogg wrote:

Very good to know, I didn't know that. I swear by device-by-label for ZFS; it makes the disks easier to identify and keep track of. It's a bit more painful, but by-id is another proper way. Using plain device letters/names will often get you into big trouble after a reboot; it's never worth it IMO.

So, you create a label for each disk using something like

e2label /dev/sdx <label>

and then use these labels to define the pool?

This is probably a smart thing to do, as you likely make the labels locational identifiers in your chassis so that it's easy to find and replace failed disks.  I'm still not sure what procedure I will use to find failed disks.  Hopefully the Supermicro chassis will give me a red light indicator, or something, similar to the way the LSI RAID controllers work.
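
If I understand the label approach, it would be something like the following (my assumption only; since ZFS members aren't ext filesystems, I'd expect GPT partition names set with sgdisk rather than e2label, with the chassis bay as the name):

# sgdisk -c 1:bay03 /dev/sdx
# zpool import -d /dev/disk/by-partlabel backup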

Offline
