[matthew@lunas ~]$ lsblk
NAME     MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda        8:0    0   2.7T  0 disk
sdb        8:16   0   2.7T  0 disk
sdc        8:32   0   2.7T  0 disk
sdd        8:48   0   2.7T  0 disk
sde        8:64   0   2.7T  0 disk
sdf        8:80   0 931.5G  0 disk
├─sdf1     8:81   0     8G  0 part
└─sdf2     8:82   0 923.5G  0 part /
[matthew@lunas ~]$ sudo zpool create -o ashift=12 nas raidz1 /dev/disk/by-id/ata-HGST_*
cannot create 'nas': one or more devices is currently unavailable
(Note: I've also tried specifying all the IDs manually in the command; same issue.)
Any ideas?
Dunno for sure... why not use another identifier?
Well, mainly because of what's explained here (which the Arch wiki page on ZFS links to, btw):
http://zfsonlinux.org/faq.html#WhatDevN … tingMyPool
So it looks like for me on Arch the only options are using /dev/sd* or by-id.
I don't want to have to deal with the sd* names changing on me, that's why (I'd also like my ZFS pool to keep working even if I physically move the drives around to different SATA ports).
Technically I could create a single partition on each drive, set a label on that partition, and then use /dev/disk/by-partlabel (the wiki mentions this), but apparently doing that means ZFS won't manage the drive's write cache, or something along those lines. Plus I'm not sure if that would even work; it could end up hitting this same issue...
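Roughly what I mean by that partlabel approach (device name and label here are just examples, I haven't actually run this):

sudo sgdisk -n 1:0:0 -c 1:nas-disk1 /dev/sda    # one partition spanning the disk, with a GPT partition label
sudo zpool create -o ashift=12 nas raidz1 /dev/disk/by-partlabel/nas-disk1 ...    # and so on for the other labels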
What about the root cause instead of thinking about weird workarounds? What about that mounted rootfs on /? Should sdf become part of the zpool as well?
What about the root cause instead of thinking about weird workarounds? What about that mounted rootfs on /? Should sdf become part of the zpool as well?
Please explain to me how that is the root cause, and what workarounds? Do you even know anything about ZFS? First, that is a 1TB HDD; it could not reasonably be added to a ZFS pool with 3TB HDDs...
In addition, I had originally looked into putting the root on ZFS, but found next to no information on how to go about booting off of a ZFS pool in Linux...
Sorry, but you're not making much sense here...
edit:
Plus, booting off of a ZFS pool, even if it worked, would not have quite the performance of using the whole drives, because of the probable need to partition the drives and the cache not being used.
Last edited by dfanz0r (2015-10-10 00:18:41)
Looking at the zfsonlinux GitHub, this is a reported issue, so...
https://github.com/zfsonlinux/zfs/issues/3708
Won't the wildcard pick up the partitions as well as the whole disks?
Have you tried specifying each disk explicitly rather than relying on the '*' wildcard?
Won't the wildcard pick up the partitions as well as the whole disks?
Have you tried specifying each disk explicitly rather than relying on the '*' wildcard?
It would have if the drives had partitions on them. And yes, I have tried specifying the drives individually.
But there is a workaround mentioned in the GitHub issue that works fine. Basically, you can export and re-import the ZFS pool and have it switch to using IDs in the process:
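For reference, the explicit version I tried was something like this (the same by-id paths that show up after the import below), and it failed with the same error:

sudo zpool create -o ashift=12 nas raidz1 \
    /dev/disk/by-id/ata-HGST_HDN724030ALE640_PK2234P9HVANDY \
    /dev/disk/by-id/ata-HGST_HDN724030ALE640_PK2234P9J5VPVY \
    /dev/disk/by-id/ata-HGST_HDN724030ALE640_PK2234P9HXU5SY \
    /dev/disk/by-id/ata-HGST_HDN724030ALE640_PK2234P9J0ARUY \
    /dev/disk/by-id/ata-HGST_HDN724030ALE640_PK2234P9HXKGDY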
[matthew@lunas ~]$ sudo zpool status
  pool: nas
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        nas         ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sda     ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0

errors: No known data errors
[matthew@lunas ~]$ sudo zpool export nas
[matthew@lunas ~]$ sudo zpool import -d /dev/disk/by-id nas
[matthew@lunas ~]$ sudo zpool status
  pool: nas
 state: ONLINE
  scan: none requested
config:

        NAME                                         STATE     READ WRITE CKSUM
        nas                                          ONLINE       0     0     0
          raidz1-0                                   ONLINE       0     0     0
            ata-HGST_HDN724030ALE640_PK2234P9HVANDY  ONLINE       0     0     0
            ata-HGST_HDN724030ALE640_PK2234P9J5VPVY  ONLINE       0     0     0
            ata-HGST_HDN724030ALE640_PK2234P9HXU5SY  ONLINE       0     0     0
            ata-HGST_HDN724030ALE640_PK2234P9J0ARUY  ONLINE       0     0     0
            ata-HGST_HDN724030ALE640_PK2234P9HXKGDY  ONLINE       0     0     0

errors: No known data errors
[matthew@lunas ~]$
Last edited by dfanz0r (2015-10-15 00:38:11)