#1 2015-09-18 13:49:41

predmijat
Member
Registered: 2014-09-30
Posts: 39

HDD mount problem (bad FSTYPE)

Hi,

I have SSD and HDD in my PC. HDD is used for storage - it has only one partition ( /dev/sdb1 ) formatted as ext4.
fstab entry:

UUID=90292076-0e96-440f-9766-9b1dd518ac76               /storage                ext4            defaults,noatime                                                                        0 0

Everything worked great until today: I updated my system, powered it off, then on again, and it hung on boot complaining that it couldn't mount the device with the UUID of /dev/sdb1.

"lsblk -f" now shows that FSTYPE of /dev/sdb1 is "zfs_member" and LABEL is "zstorage".
"pacman -Qs | grep zfs" gives 0 (zero) results.

If I do "mount /dev/sdb1 /storage" I get the following output:

mount: /dev/sdb1: more filesystems detected. This should not happen,
       use -t <type> to explicitly specify the filesystem type or
       use wipefs(8) to clean up the device.

If I then do "mount -t ext4 /dev/sdb1 /storage" it will work. If I do just "mount /storage" it will also work. But it doesn't work on boot.
Gparted says that it's ext4.
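
For reference, I believe the two conflicting signatures can be listed without modifying anything, since wipefs only wipes when given -a or -o, and "blkid -p" is a read-only low-level probe (the device name is of course specific to my setup):

# wipefs /dev/sdb1      (prints every signature libblkid finds, wipes nothing)
# blkid -p /dev/sdb1    (low-level probe; complains about an "ambivalent result" when more than one signature matches)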

So I guess my questions are:

1. How did this happen?
2. How can I get rid of "more filesystems detected"?
3. Why does "mount /storage" work ok, but mounting during boot (using that same /etc/fstab) hangs?

Thanks!

Last edited by predmijat (2015-09-18 14:09:15)

#2 2015-09-18 21:57:19

xificurC
Member
Registered: 2015-02-06
Posts: 9

Re: HDD mount problem (bad FSTYPE)

I had a very similar issue, only I couldn't even mount it by hand to /home, only to another location. I got a suggestion on the #archlinux IRC channel to run zerofree on the partition, which indeed solved the issue for me.
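
If I recall correctly, the invocation was roughly the following (just a sketch from memory - the partition has to be unmounted, or mounted read-only, for zerofree to agree to run, and /dev/sdXN is a placeholder):

# zerofree -v /dev/sdXN    (zeroes the unused blocks of an ext2/3/4 filesystem; -v shows progress)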

#3 2015-09-18 23:50:31

predmijat
Member
Registered: 2014-09-30
Posts: 39

Re: HDD mount problem (bad FSTYPE)

Appreciate your help, but I'd really like a better answer for this...

I've also tried wipefs, but it doesn't work. It prints the following message: "/dev/sdb1: 8 bytes were erased at offset 0x1d1c0fa7800 (zfs_member): 0c b1 ba 00 00 00 00 00", yet it doesn't actually seem to do anything: running wipefs again gives the same output as before...

#4 2015-09-19 07:05:23

x33a
Forum Fellow
Registered: 2009-08-15
Posts: 4,587

Re: HDD mount problem (bad FSTYPE)

What's the output of

# tune2fs -l /dev/sdb1

Also, did you access the drive from some other operating system/distro, perhaps using some live media?

#5 2015-09-19 07:24:03

predmijat
Member
Registered: 2014-09-30
Posts: 39

Re: HDD mount problem (bad FSTYPE)

I didn't use another OS to access it; I'm able to access it from the current OS, I just have to mount it manually. Here is the output of tune2fs:

# tune2fs -l /dev/sdb1
tune2fs 1.42.12 (29-Aug-2014)
Filesystem volume name:   <none>
Last mounted on:          /storage
Filesystem UUID:          90292076-0e96-440f-9766-9b1dd518ac76
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              122101760
Block count:              488378385
Reserved block count:     24418919
Free blocks:              194726595
Free inodes:              120465622
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      907
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Flex block group size:    16
Filesystem created:       Sun Nov 10 10:57:23 2013
Last mount time:          Sat Sep 19 09:01:17 2015
Last write time:          Sat Sep 19 09:01:17 2015
Mount count:              755
Maximum mount count:      -1
Last checked:             Sun Nov 10 10:57:23 2013
Check interval:           0 (<none>)
Lifetime writes:          7076 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      e2bc9652-576d-499c-afb2-ee49e3b61bcd
Journal backup:           inode blocks

But wipefs gives this:

# wipefs /dev/sdb1
offset               type
----------------------------------------------------------------
0x1d1c0fa7400        zfs_member   [raid]
                     LABEL: zstorage
                     UUID:  12661834248699203227

0x438                ext4   [filesystem]
                     UUID:  90292076-0e96-440f-9766-9b1dd518ac76

and I can't get rid of the ZFS part:

# blkid -p /dev/sdb1
/dev/sdb1: ambivalent result (probably more filesystems on the device, use wipefs(8) to see more details)
# wipefs /dev/sdb1
offset               type
----------------------------------------------------------------
0x1d1c0fa7400        zfs_member   [raid]
                     LABEL: zstorage
                     UUID:  12661834248699203227

0x438                ext4   [filesystem]
                     UUID:  90292076-0e96-440f-9766-9b1dd518ac76

# wipefs -o 0x1d1c0fa7400 /dev/sdb1
/dev/sdb1: 8 bytes were erased at offset 0x1d1c0fa7400 (zfs_member): 0c b1 ba 00 00 00 00 00
# blkid -p /dev/sdb1
/dev/sdb1: ambivalent result (probably more filesystems on the device, use wipefs(8) to see more details)

edit: I've found this - http://comments.gmane.org/gmane.linux.u … ux-ng/7437 - it seems wipefs might have problems with ZFS signatures...
edit2: I've edited /etc/fstab and replaced the UUID of /dev/sdb1 with "/dev/sdb1" - it mounts on boot now too. The zfs_member problem is still there, I just found a way around it...ideas are still welcome.
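
For completeness, the workaround line in /etc/fstab now looks roughly like this (same options as before, only the device node in place of the UUID):

/dev/sdb1               /storage                ext4            defaults,noatime        0 0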

Last edited by predmijat (2015-09-19 09:37:31)

#6 2015-10-09 04:15:10

dazoe
Member
Registered: 2015-06-21
Posts: 2

Re: HDD mount problem (bad FSTYPE)

I had the same problem...
After using "wipefs -o XXX /dev/sdXX", another zfs_member would show up, but at a different offset. After what felt like a million passes (really more like 20, it just seemed like a lot because I was doing all of this through a crappy IP-KVM) it finally stopped, and the real zfs_member on my drive was left intact, i.e. sdb1 - zfs_member, sdb2 - ext4.
So I'd say try that. Also, if you have ZFS installed, you could try "zpool labelclear" - that might work.
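
Roughly what I mean (the offsets are whatever wipefs reports for the bogus signatures on your drive - mine changed on every pass, and /dev/sdXN is just a placeholder):

# wipefs /dev/sdXN                  (list the signatures currently detected)
# wipefs -o 0x<offset> /dev/sdXN    (erase the zfs_member signature at that offset; repeat until only the real filesystems remain)
# zpool labelclear -f /dev/sdXN     (alternative, if the ZFS userspace tools are installed)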

Last edited by dazoe (2015-10-09 04:15:35)

#7 2015-10-09 04:18:44

predmijat
Member
Registered: 2014-09-30
Posts: 39

Re: HDD mount problem (bad FSTYPE)

Yeah, in the meantime I got new HDDs and replaced the old one, so I can't check that...it was working fine with /dev/sdb1 in fstab instead of the UUID, and that was good enough. I still don't have answers to the questions in the OP, though.

#8 2016-06-27 21:51:53

gilbertw1
Member
Registered: 2016-04-27
Posts: 14

Re: HDD mount problem (bad FSTYPE)

Holy cow, I just ran into this problem and spent my entire day banging my head against it. I'll put this here in case it helps anyone in the future:

I ran into this problem after using GParted to delete a partition with a ZFS file system on it and expand the preceding partition (root - ext4) to absorb the resulting free space. After that operation, blkid refused to generate an identifier for the partition (so no entry under /dev/disk/by-uuid/) and GRUB refused to mount the file system. This was due to all the file system tools thinking there were two filesystems (ext4 & ZFS) on the partition. I COULD mount it manually by specifying the file system type (mount -t ext4 /dev/...), but I could never figure out how to make GRUB mount it on startup.

I was finally able to fix it by taking the above advice and running 'zpool labelclear /dev/..' to remove the label. That, however, screwed up the superblocks and corrupted the drive...but running 'fsck.ext4' regenerated the superblocks. After fsck completed, I updated fstab, regenerated my grub.cfg, and my system (finally!) was able to boot without issue.
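
In case it's useful, the rough sequence that worked for me looked like this (device path is only an example; as noted, labelclear trashed the ext4 superblocks, which is why the fsck step was needed):

# zpool labelclear -f /dev/sdXN           (removes the stale ZFS label)
# fsck.ext4 -f /dev/sdXN                  (repaired/regenerated the superblocks in my case)
# grub-mkconfig -o /boot/grub/grub.cfg    (regenerate grub.cfg after updating fstab)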

It appears that most Linux file system tools (gparted, parted, wipefs, to name a few) don't know how to properly clean up ZFS filesystems, which results in these labels being left around to screw things up. So I guess the moral of the story is to run 'zpool labelclear' on the partition that had the ZFS file system on it before using it for anything important.

#9 2016-06-27 22:37:04

alphaniner
Member
From: Ancapistan
Registered: 2010-07-12
Posts: 2,810

Re: HDD mount problem (bad FSTYPE)

gilbertw1 wrote:
It appears that most Linux file system tools (gparted, parted, wipefs, to name a few) don't know how to properly clean up ZFS filesystems

Partitioning tools shouldn't be expected to do this in the first place. Formatting tools and wipefs are another matter. The problem seems to be related to the fact that ZFS writes numerous signatures, but I believe LVM2 properly locates and wipes them upon LV creation.


But whether the Constitution really be one thing, or another, this much is certain - that it has either authorized such a government as we have had, or has been powerless to prevent it. In either case, it is unfit to exist.
-Lysander Spooner
