#1 2020-01-06 00:10:22

cdysthe
Member
Registered: 2009-11-20
Posts: 62

Gparted only seeing old ZFS pool after new install.

Hi,

I was trying out ZFS on my laptop but decided to go back to my normal disk setup with an ext4 /boot partition and an xfs / partition for all the rest. I partitioned the disk and installed with the GNOME desktop. All seemed normal until I fired up GParted and all I saw was an old ZFS pool on device /dev/nvme0n1. Nothing else.

Then I ran parted and saw this output:

Model: WDC PC SN730 SDBQNTY-512G-1001 (nvme)
Disk /dev/nvme0n1: 512GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size   File system  Name                  Flags
 1      1049kB  538MB   537MB  fat32        EFI System Partition  boot, esp
 2      538MB   1088MB  551MB  ext4         boot
 3      1088MB  512GB   511GB  xfs

Blkid shows:

/dev/nvme0n1p3: LABEL="xfs" UUID="2d5ba3d0-d9ef-4465-8cfa-67fe87b6c791" TYPE="xfs" PARTUUID="7d315998-bf5c-48a9-b061-fea86c00a2f0"
/dev/loop0: TYPE="squashfs"
/dev/loop1: TYPE="squashfs"
/dev/loop2: TYPE="squashfs"
/dev/loop3: TYPE="squashfs"
/dev/loop4: TYPE="squashfs"
/dev/loop5: TYPE="squashfs"
/dev/loop6: TYPE="squashfs"
/dev/loop7: TYPE="squashfs"
/dev/nvme0n1p1: LABEL_FATBOOT="EFI" LABEL="EFI" UUID="E066-6430" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="33001af0-5295-43a8-80ed-0ad7a12690e8"
/dev/nvme0n1p2: LABEL="ext4" UUID="11c60a5e-1b2f-4278-ad3b-c7dc475a17dd" TYPE="ext4" PARTLABEL="boot" PARTUUID="1c1aa0fa-3f1e-4274-8f76-060bb748bad1"

fstab is:

UUID=2d5ba3d0-d9ef-4465-8cfa-67fe87b6c791 /               xfs     defaults        0       0
UUID=11c60a5e-1b2f-4278-ad3b-c7dc475a17dd /boot           ext4    defaults        0       2
UUID=E066-6430  /boot/efi       vfat    umask=0077      0       1
/swapfile

Why is GParted showing the old ZFS pool, and can I remove it without wiping the disk and starting over? The machine runs fine, but I would like to get rid of the ghost of ZFS lurking somewhere in the background.

Last edited by cdysthe (2020-01-07 15:01:29)

#2 2020-01-06 16:10:06

Head_on_a_Stick
Member
From: London
Registered: 2014-02-20
Posts: 7,680
Website

Re: Gparted only seeing old ZFS pool after new install.

You can use wipefs(8) to get rid of the magic strings; there were so many on my disk that I had to use a shell loop to get rid of them all.
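
Something like this, for example (just a sketch, not tested against this disk: since GParted reports the pool on the whole device, it is worth checking /dev/nvme0n1 as well as the partitions, and the --types filter keeps everything except the ZFS signatures untouched):

# no options: just report every signature wipefs can see, with its offset
wipefs /dev/nvme0n1

# erase only the zfs_member signatures; the loop is for older wipefs
# versions that find/remove one copy per pass (ZFS keeps several labels)
while wipefs /dev/nvme0n1 | grep -q zfs_member; do
    wipefs --all --types zfs_member /dev/nvme0n1
done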

#3 2020-01-07 15:03:33

cdysthe
Member
Registered: 2009-11-20
Posts: 62

Re: Gparted only seeing old ZFS pool after new install.

Head_on_a_Stick wrote:

You can use wipefs(8) to get rid of the magic strings; there were so many on my disk that I had to use a shell loop to get rid of them all.

Just to make sure: I won't be affecting the current partitions and Arch install by doing this? Normally I would just try it, but when it comes to disk partitions I like to make sure I won't mess things up.

#4 2020-01-07 15:09:33

Head_on_a_Stick
Member
From: London
Registered: 2014-02-20
Posts: 7,680
Website

Re: Gparted only seeing old ZFS pool after new install.

cdysthe wrote:

Just to make sure: I won't be affecting the current partitions and Arch install by doing this?

As long as you only delete the ZFS magic strings it should be okay. Back up first though, just in case. Run wipefs on the partitions without any options first to see where the signatures are.
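
For example, once the report shows where the zfs_member copies sit, each one can be erased by its offset while keeping a backup of the overwritten bytes (the offset below is only a placeholder; use the values wipefs actually reports for your partition):

# no options: report each signature with its offset and type
wipefs /dev/nvme0n1p3

# erase a single signature at the reported offset, saving the old
# bytes to ~/wipefs-nvme0n1p3-<offset>.bak beforehand
wipefs --backup --offset 0x3f000 /dev/nvme0n1p3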

#5 2020-01-11 21:31:44

sitquietly
Member
From: On the Wolf River
Registered: 2010-07-12
Posts: 219

Re: Gparted only seeing old ZFS pool after new install.

I've been stung too many times by stray ZFS labels on re-purposed drives.  If the old pool is still actually present on the disk (not just the label) and you have ZFS on your system, destroy it first with

zpool destroy poolname

If the pool is gone but the label has been left in place you can get rid of the label with

zpool labelclear -f devicename

(labelclear works on a device such as a disk or partition rather than a pool name; -f is needed because the device still looks like part of an exported pool.)
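
If you want to confirm whether a stale label is actually still there before (or after) clearing, zdb can dump whatever labels it finds on a device, e.g. (just an illustration; it needs the ZFS userland installed, and the device name is taken from the parted output above):

# dump any ZFS vdev labels still present on the device
zdb -l /dev/nvme0n1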

In general, whenever repurposing a disk it is good practice to do an ATA Secure Erase to get rid of GPT labels, ZFS labels, encryption keys, and Stuff That Will Bite You If Left Hanging Around. A secure erase on even my old SSDs takes less than a minute; on my pile of 1 TB JBODs it takes 3 hours, but I find it worthwhile to take the time to get a clean disk.
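
That usually means something like the hdparm sequence below for SATA drives, or nvme-cli's format command for an NVMe device like the one in this thread (device names and the temporary password are placeholders, and these commands wipe the whole drive, so triple-check the target first):

# SATA: confirm the drive supports the security feature set and is "not frozen"
hdparm -I /dev/sdX

# set a temporary security password, then issue the erase
hdparm --user-master u --security-set-pass tmppass /dev/sdX
hdparm --user-master u --security-erase tmppass /dev/sdX

# NVMe: format the namespace with secure erase setting 1 (user data erase)
nvme format /dev/nvme0n1 --ses=1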
