Hey all,
Just a question that popped into my head a few days ago when fsck ran during boot because /home had been mounted 23 times without a check.
Why does it run after 23 mounts and not some other number? Is there a reason? Did data corruption happen more often after 23 mounts in testing, or what? I did a little digging on Google and nothing came up.
Can someone enlighten me on this burning question?
If you can't sit by a cozy fire with your code in hand enjoying its simplicity and clarity, it needs more work. --Carlos Torres
Offline
tune2fs -l $device
In particular, notice 'Mount count' and 'Maximum mount count'. You can adjust the maximum with tune2fs's -c flag.
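For example (the device name and numbers here are just illustrative):

    # tune2fs -l /dev/sda2 | grep -i 'mount count'
    Mount count:              5
    Maximum mount count:      23

    # tune2fs -c 30 /dev/sda2

The second command raises the limit to 30 mounts; tune2fs needs root, hence the # prompt.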
I see... does this somehow answer my question as to why they chose the seemingly random number of 23? Not sure that I get it.
If you can't sit by a cozy fire with your code in hand enjoying its simplicity and clarity, it needs more work. --Carlos Torres
Offline
If you have other partitions, you'll find that they tend to have different values for the max mount count.
If you want to find out which range is used and why, the source code of the e2fsprogs package should help.
Disliking systemd intensely, but not satisfied with alternatives so focusing on taming systemd.
clean chroot building not flexible enough ?
Try clean chroot manager by graysky
Offline
If you have other partitions, you'll find that they tend to have different values for the max mount count.
If you want to find out which range is used and why, the source code of the e2fsprogs package should help.
Indeed, /dev/sda1 has -1 and /dev/sda2 has 23. I think looking into the source code is a tad much to answer this question, though.
If you can't sit by a cozy fire with your code in hand enjoying its simplicity and clarity, it needs more work. --Carlos Torres
Offline
A maximum mount count of -1 means mount-count checking is disabled in that partition's superblock (tune2fs -c -1 sets that). The fstab pass field is a separate mechanism, controlling whether fsck runs at boot at all.
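You can see that directly (output is illustrative):

    # tune2fs -l /dev/sda1 | grep 'Maximum mount count'
    Maximum mount count:      -1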
I think looking into the source code is a tad much to answer this question, though
Why? You'll read my code ... e2fsck is much more organized. I just downloaded it and found an answer quickly enough, and learned a bit from it too.
Last edited by Trilby (2013-01-14 12:04:20)
"UNIX is simple and coherent" - Dennis Ritchie; "GNU's Not Unix" - Richard Stallman
Offline
For reference: my ext4 partitions use 24, 27, and 38 as the max mount count.
Disliking systemd intensely, but not satisfied with alternatives so focusing on taming systemd.
clean chroot building not flexible enough ?
Try clean chroot manager by graysky
Offline
For reference: my ext4 partitions use 24, 27, and 38 as the max mount count.
Well, there goes the prime number hypothesis that I was about to float...
Offline
AFAIK, it's pseudo-randomly chosen when the filesystem is created, to avoid all partitions being checked after the same number of mounts (i.e., at the same time).
Otherwise users would suddenly have to wait for all of their partitions to be checked on one particular reboot, instead of having the checks spread out.
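If I remember the mke2fs source correctly, the "randomness" is derived from the filesystem's UUID: it starts from a default of 20 and adds the sum of the UUID's 16 bytes modulo 20, landing somewhere in the 20-39 range, which fits the 23, 24, 27 and 38 seen in this thread. A rough shell re-creation of that scheme, purely as an illustration (run as root so blkid can read the device; the default of 20 is my reading of the source, not gospel):

    uuid=$(blkid -s UUID -o value /dev/sda2)    # the filesystem's UUID
    sum=0
    for pair in $(echo "$uuid" | tr -d '-' | fold -w2); do
        sum=$(( sum + 0x$pair ))                # add each UUID byte
    done
    echo "expected maximum mount count: $(( 20 + sum % 20 ))"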
Are you familiar with our Forum Rules, and How To Ask Questions The Smart Way?
BlueHackers // fscanary // resticctl
Offline