I just noticed that fsck has not been running on my system and found the following:
dumpe2fs -h /dev/sda2 | grep -i 'mount count'
dumpe2fs 1.42.12 (29-Aug-2014)
Mount count: 110
Maximum mount count: -1
and
dumpe2fs -h /dev/sda3 | grep -i 'mount count'
dumpe2fs 1.42.12 (29-Aug-2014)
Mount count: 110
Maximum mount count: -1
My fstab is as follows:
/ ext4 rw,noatime,data=ordered,discard 0 1
/home ext4 rw,noatime,data=ordered,discard 0 2
My understanding is that the "-1" for the maximum mount count means the file system doesn't get checked at all.
What would cause the maximum mount count to be set to -1?
One thing that I did do in the past was to remove the fsck hook from mkinitcpio and copy the appropriate systemd files per the wiki on quiet boot.
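For context, the HOOKS line in my /etc/mkinitcpio.conf currently looks roughly like this (fsck removed from the end; the exact hook list depends on the install, so treat it as illustrative):

# /etc/mkinitcpio.conf -- fsck used to be the last hook on this line
HOOKS="base udev autodetect modconf block filesystems keyboard"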
I'm pretty sure that's something you specify when creating the filesystem. Maybe a check interval was defined instead of a mount count? You can use tune2fs to adjust both settings.
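Something like this would re-enable both checks (the mount count and interval values here are just examples, and the device names are taken from your output above):

# check after every 30 mounts, or every 180 days, whichever comes first
tune2fs -c 30 -i 180d /dev/sda2
tune2fs -c 30 -i 180d /dev/sda3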
After adjusting the max mount count with tune2fs, everything is good.
Thanks!
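For anyone finding this later, the new limits can be confirmed with the same command as in my first post:

dumpe2fs -h /dev/sda2 | grep -i 'mount count'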
Please remember to mark your thread as [Solved] by editing your first post and prepending it to the title.
Mine is also set to -1:
% sudo dumpe2fs -h /dev/sda3 | grep -i 'mount count'
dumpe2fs 1.42.12 (29-Aug-2014)
Mount count: 3
Maximum mount count: -1
The mount count is wrong; it has been mounted more than 3 times.
Is the -1 value due to this setting in /etc/mke2fs.conf?

enable_periodic_fsck = 0
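For reference, the [defaults] section of a stock /etc/mke2fs.conf contains that setting (trimmed here to the relevant line; exact contents vary by e2fsprogs version):

[defaults]
	enable_periodic_fsck = 0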
I marked this thread as [Solved], but I'm seeing this when I check my boot partition:
dumpe2fs -h /dev/sda1 | grep -i 'mount count'
dumpe2fs 1.42.12 (29-Aug-2014)
dumpe2fs: Bad magic number in super-block while trying to open /dev/sda1
NAME FSTYPE SIZE LABEL
sda 477G
├─sda1 vfat 512M
├─sda2 ext4 25G
└─sda3 ext4 451.4G
Do I need to be concerned with this, or is this normal for an EFI system boot partition?
From the dumpe2fs description: dump ext2/ext3/ext4 filesystem information.
Your EFI partition sda1 is formatted as vfat, and dumpe2fs doesn't handle FAT filesystems (or ZFS, Btrfs, NTFS, ReiserFS, etc.), so that error is expected.
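If you do want to check the ESP, dosfstools provides a checker for vfat (run it while the partition is unmounted; the device name is from your lsblk output):

fsck.fat -v /dev/sda1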