
#1 2016-08-18 14:09:40

stefano
Member
Registered: 2011-04-09
Posts: 258

[SOLVED] ZFS --> replacing drive with changed bus ids

I have a file server with five hard drives: four internal SATA drives in a raidz2 configuration, which hold the data, and a small external drive connected over USB for the Arch Linux system.
Usually the data drives are assigned sda through sdd, and the system drive is sde. Yesterday one of the data drives failed. zpool status showed something like this (going from memory):

  pool: zfsdatapool
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-4J
  scan: resilvered 42.1M in 0h4m with 0 errors on Mon Aug 15 08:51:10 2016
config:

        NAME                      STATE     READ WRITE CKSUM
        zfsdatapool               DEGRADED     0     0     0
          raidz2-0                DEGRADED     0     0     0
            sda                   ONLINE       0     0     0
            sdb                   ONLINE       0     0     0
            sdc                   UNAVAIL      0     0     0
            sdd                   ONLINE       0     0     0 

So I took out the faulted drive and put a new one into the same slot, ready to replace it in the zfs pool (I have done this several times).
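For reference, the replace step I would normally run at this point is something like the following (the target is just a placeholder for the new disk's device node):

zpool replace zfsdatapool sdc /dev/sdX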
At reboot, though, the system reshuffled the drive IDs, with the system drive now ending up as sdd and the fourth data drive (formerly sdd) now being sde:

[stefano@polus ~]$ lsblk -f
NAME   FSTYPE     LABEL       UUID                                 MOUNTPOINT
fd0                                                                
sda                                                                
|-sda1 zfs_member zfsdatapool 7407363603306778712                  
`-sda9                                                             
sdb                                                                
|-sdb1 zfs_member zfsdatapool 7407363603306778712                  
`-sdb9                                                             
sdc                                                                
sdd                                                                
|-sdd1 ext4                   e6ac9c77-e717-46bb-a63c-88ac9518ff28 /boot
|-sdd2 ext4                   ad9b956a-a060-4b5e-99ca-04b26d955663 /
|-sdd3 swap                   f54ca67b-74c8-44bd-aa53-18902677b051 [SWAP]
|-sdd4 ext4                   30dc79cf-4a92-41d0-9b73-f15ffe092771 /home
`-sdd5                                                             
sde                                                                
|-sde1 zfs_member zfsdatapool 7407363603306778712                  
`-sde9                                                             

I was expecting to see a similar rearrangement in the output of zpool status, but this is what I see instead:

  pool: zfsdatapool
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-4J
  scan: resilvered 42.1M in 0h4m with 0 errors on Mon Aug 15 08:51:10 2016
config:

        NAME                      STATE     READ WRITE CKSUM
        zfsdatapool               DEGRADED     0     0     0
          raidz2-0                DEGRADED     0     0     0
            sda                   ONLINE       0     0     0
            sdb                   ONLINE       0     0     0
            13304334072366138350  UNAVAIL      0     0     0  was /dev/sdc1
            7694069885512566572   UNAVAIL      0     0     0  was /dev/sdd1

errors: No known data errors

It looks like the pool is down to two disks instead of three. How can I recover the perfectly functioning disk that used to be sdd and is now sde? As far as I know there is no way to predict how the system will assign the sda, sdb, etc. IDs.
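For what it's worth, the persistent names under /dev/disk/by-id (created by udev from each drive's model and serial number) don't change when the sdX order shuffles, so listing them should show which entry now points at the old sdd:

ls -l /dev/disk/by-id/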

Last edited by stefano (2016-08-18 17:52:46)


#2 2016-08-18 16:51:00

ukhippo
Member
From: Non-paged pool
Registered: 2014-02-21
Posts: 366

Re: [SOLVED] ZFS --> replacing drive with changed bus ids

Using sdX is always a bad idea and not recommended for production pools.

Try doing:

zpool export zfsdatapool
zpool import -d /dev/disk/by-id zfsdatapool

If that doesn't work, try adding “zfs_import_dir=/dev/disk/by-id” to your kernel command line (you may also have to delete the zpool.cache file).
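A rough sketch of the whole sequence, assuming the default ZFS on Linux cache file at /etc/zfs/zpool.cache:

zpool export zfsdatapool
rm /etc/zfs/zpool.cache        # only if the import still picks up stale sdX paths
zpool import -d /dev/disk/by-id zfsdatapool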


#3 2016-08-18 17:52:23

stefano
Member
Registered: 2011-04-09
Posts: 258

Re: [SOLVED] ZFS --> replacing drive with changed bus ids

ukhippo wrote:

Using sdX is always a bad idea and not recommended for production pools.

Try doing:

zpool export zfsdatapool
zpool import -d /dev/disk/by-id zfsdatapool

If that doesn't work, try adding “zfs_import_dir=/dev/disk/by-id” to your kernel command line (you may also have to delete the zpool.cache file).


Thanks a lot, exporting and reimporting the pool worked perfectly.
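In case it helps anyone else: with the pool imported by-id, the actual replacement of the failed member can then reference the GUID that zpool status shows for the UNAVAIL device, along these lines (the by-id path is a placeholder for the new disk):

zpool replace zfsdatapool 13304334072366138350 /dev/disk/by-id/<new-disk-id>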

Last edited by stefano (2016-08-18 17:53:43)

