After upgrading to linux-4.6.1-2, 'zpool status' gives me:
pool: dysk1
state: ONLINE
scan: scrub repaired 0 in 4h7m with 0 errors on Sun Jun 5 08:07:51 2016
Cancelled
and dumps core:
Stack trace of thread 6389:
#0 0x00007fe5f3436295 raise (libc.so.6)
#1 0x00007fe5f34376da abort (libc.so.6)
#2 0x00007fe5f3e2af0c zpool_vdev_name (libzfs.so.2)
#3 0x0000000000405d17 n/a (zpool)
#4 0x0000000000405e4b n/a (zpool)
#5 0x0000000000405e4b n/a (zpool)
#6 0x000000000040fe63 n/a (zpool)
#7 0x0000000000405698 n/a (zpool)
#8 0x0000000000405854 n/a (zpool)
#9 0x000000000040c696 n/a (zpool)
#10 0x00000000004051c7 n/a (zpool)
#11 0x00007fe5f3423741 __libc_start_main (libc.so.6)
#12 0x0000000000405329 n/a (zpool)
zpool status -g works:
pool: dysk1
state: ONLINE
scan: scrub repaired 0 in 4h7m with 0 errors on Sun Jun 5 08:07:51 2016
config:
NAME                        STATE     READ WRITE CKSUM
dysk1                       ONLINE       0     0     0
  15818043469097691515      ONLINE       0     0     0
    16514528765313984671    ONLINE       0     0     0
    3110422553662067588     ONLINE       0     0     0
    17337481221678615319    ONLINE       0     0     0
    4084336948802892907     ONLINE       0     0     0
errors: No known data errors
pool: dysk2
state: ONLINE
scan: scrub repaired 0 in 6h31m with 0 errors on Sun Jun 5 06:31:57 2016
config:
NAME                        STATE     READ WRITE CKSUM
dysk2                       ONLINE       0     0     0
  7742247340514291022       ONLINE       0     0     0
    4521136538135737535     ONLINE       0     0     0
    13434009882412867458    ONLINE       0     0     0
    15102687328896789506    ONLINE       0     0     0
    15337135879999007921    ONLINE       0     0     0
errors: No known data errors
Something seems to be wrong with the disk naming.
lsblk -f output:
NAME FSTYPE LABEL UUID MOUNTPOINT
sda
├─sda1 vfat 6D7A-6E3B /boot
└─sda2 ext4 2e3c08cd-f8d4-439f-a427-75c09ff3377d /
sdb
├─sdb1 zfs_member dysk1 6932605141065726638
└─sdb9
sdc
├─sdc1 zfs_member dysk1 6932605141065726638
└─sdc9
sdd
├─sdd1 zfs_member dysk1 6932605141065726638
└─sdd9
sde
├─sde1 zfs_member dysk1 6932605141065726638
└─sde9
sdf
├─sdf1 zfs_member dysk2 12961184324134055696
└─sdf9
sdg
├─sdg1 zfs_member dysk2 12961184324134055696
└─sdg9
sdh
├─sdh1 zfs_member dysk2 12961184324134055696
└─sdh9
sdi
├─sdi1 zfs_member dysk2 12961184324134055696
└─sdi9
ls -l /dev/disk/by-uuid/ output:
lrwxrwxrwx 1 root root 10 06-09 10:32 12961184324134055696 -> ../../sdg1
lrwxrwxrwx 1 root root 10 06-09 10:32 2e3c08cd-f8d4-439f-a427-75c09ff3377d -> ../../sda2
lrwxrwxrwx 1 root root 10 06-09 10:32 6932605141065726638 -> ../../sde1
lrwxrwxrwx 1 root root 10 06-09 10:32 6D7A-6E3B -> ../../sda1
Why can't I see a UUID for all disks? What can be done about it?
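One possible explanation (my reading of the output above, not verified against the udev sources): /dev/disk/by-uuid can hold only one symlink per UUID value, and every zfs_member partition in a pool reports the same pool GUID as its "UUID", so each member probed later overwrites the symlink of the one before it and only the last device remains. The collision can be simulated with plain symlinks in a temporary directory:

```shell
#!/bin/sh
# Sketch: why by-uuid shows only one member per pool.
# All four dysk1 members report the same pool GUID (6932605141065726638),
# and a directory can hold only one entry per name, so each device
# probed later overwrites the symlink of the one before it.
dir=$(mktemp -d)
for dev in sdb1 sdc1 sdd1 sde1; do
    ln -sf "../../$dev" "$dir/6932605141065726638"   # same link name every time
done
ls -l "$dir"    # a single symlink remains, pointing at the last device (sde1)
rm -rf "$dir"
```

For ZFS, /dev/disk/by-id (one entry per physical device) is the usual workaround, e.g. importing with `zpool import -d /dev/disk/by-id <pool>`.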
Last edited by drozdu (2016-06-09 15:29:08)
Solved.
I disabled the ZFS autostart:
systemctl disable zfs.target
rebooted, and then imported my pools manually:
zpool import <pool>
Then I re-enabled ZFS autostart, and after another reboot "zpool status" works again.