What is it with filesystems and usable disk space?
I have always wondered why ext2, ext3, ext4 and a few other filesystems won't let you use all of your free disk space.
In some cases up to 5 percent of the raw disk space is unusable after formatting (I am not talking about the -m "reserved blocks" format option in ext234). On a 1 TB hard drive, that is about 50 gigabytes of wasted space.
I wrote a small bash script [1] to produce a more illustrative output for you.
This is the output (look for the number of files that could be created on that disk image):
Creating test.img
Creating directory /mnt/test
[mkfs.ext4] Running mkfs.ext4 on test.img
[mkfs.ext4] Mounting test.img
[mkfs.ext4] Running df -h
/dev/loop0 1008M 34M 924M 4% /mnt/test
[mkfs.ext4] Creating many 1MB file chunks with random data
[mkfs.ext4] Running df -h a second time
/dev/loop0 1008M 1008M 0 100% /mnt/test
[mkfs.ext4] Number of files
977
[mkfs.ext4] Unmounting...
[mkfs.jfs] Running mkfs.jfs on test.img
[mkfs.jfs] Mounting test.img
[mkfs.jfs] Running df -h
/dev/loop0 1020M 260K 1020M 1% /mnt/test
[mkfs.jfs] Creating many 1MB file chunks with random data
[mkfs.jfs] Running df -h a second time
/dev/loop0 1020M 1020M 0 100% /mnt/test
[mkfs.jfs] Number of files
1020
[mkfs.jfs] Unmounting...
[mkfs.xfs -f] Running mkfs.xfs -f on test.img
[mkfs.xfs -f] Mounting test.img
[mkfs.xfs -f] Running df -h
/dev/loop0 1014M 33M 982M 4% /mnt/test
[mkfs.xfs -f] Creating many 1MB file chunks with random data
[mkfs.xfs -f] Running df -h a second time
/dev/loop0 1014M 1014M 20K 100% /mnt/test
[mkfs.xfs -f] Number of files
983
[mkfs.xfs -f] Unmounting...
[mkfs.btrfs] Running mkfs.btrfs on test.img
[mkfs.btrfs] Mounting test.img
[mkfs.btrfs] Running df -h
/dev/loop0 1.0G 56K 894M 1% /mnt/test
[mkfs.btrfs] Creating many 1MB file chunks with random data
[mkfs.btrfs] Running df -h a second time
/dev/loop0 1.0G 436M 255M 64% /mnt/test
[mkfs.btrfs] Number of files
690
[mkfs.btrfs] Unmounting...
Do you experience the same thing? Did you even notice/know about it?
What can one do about it? JFS, for example, lets one use almost all of the raw disk space, so why are the other filesystems greedier?
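For reference, the script boils down to roughly the following (a minimal sketch of the approach, not the exact script from [1]; the image size, the mount point and the list of mkfs commands are assumptions, and it needs root for the loop mounts):
#!/bin/bash
# Sketch: create a 1 GiB image, format it with each filesystem,
# fill it with 1 MiB files of random data and count how many fit.
IMG=test.img
MNT=/mnt/test
echo "Creating $IMG"
dd if=/dev/zero of="$IMG" bs=1M count=1024
echo "Creating directory $MNT"
mkdir -p "$MNT"
for MKFS in "mkfs.ext4" "mkfs.jfs" "mkfs.xfs -f" "mkfs.btrfs"; do
    echo "[$MKFS] Running $MKFS on $IMG"
    yes | $MKFS "$IMG" > /dev/null      # 'yes' answers the "not a block device, proceed?" prompts
    echo "[$MKFS] Mounting $IMG"
    mount -o loop "$IMG" "$MNT"
    echo "[$MKFS] Running df -h"
    df -h "$MNT" | tail -n 1
    echo "[$MKFS] Creating many 1MB file chunks with random data"
    i=0
    while dd if=/dev/urandom of="$MNT/file$i" bs=1M count=1 2> /dev/null; do
        i=$((i + 1))                    # loop ends when dd fails with "no space left on device"
    done
    echo "[$MKFS] Running df -h a second time"
    df -h "$MNT" | tail -n 1
    echo "[$MKFS] Number of files"
    ls "$MNT" | wc -l
    echo "[$MKFS] Unmounting..."
    umount "$MNT"
done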
Offline
I think tune2fs lets you change the default behavior.
https://bbs.archlinux.org/viewtopic.php?pid=657163 - it's about the '-m' option, so it may not help after all.
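For the record, adjusting the reserved percentage on an existing ext2/3/4 filesystem looks like this (just a sketch; /dev/sdXN is a placeholder for the actual device):
tune2fs -l /dev/sdXN | grep -i 'reserved block'   # show the current reserved block count
tune2fs -m 0 /dev/sdXN                            # set the reserved-blocks percentage to 0 (the default is 5)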
Last edited by karol (2011-02-21 14:50:07)
Offline
Because different filesystems use different logic to store files? ext* and JFS definitely differ enough in their metadata storage techniques to explain this. I don't think there's much you can do about it.
Last edited by adee (2011-02-21 15:57:02)
Offline
Well, I have now decided to use JFS for both the root and home partitions. JFS does not waste space, and with the deadline scheduler it is alright.
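(In case it helps anyone, the scheduler can be switched per device at runtime; sda is just a placeholder here:
cat /sys/block/sda/queue/scheduler               # list available schedulers, the active one is shown in brackets
echo deadline > /sys/block/sda/queue/scheduler   # switch this device to deadline (as root)
or system-wide via the elevator=deadline kernel parameter.)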
Offline
Regardless of what filesystem you use, you will never, ever get 100% of your potential storage space out of a hard disk. I believe that, on a standard 512-byte-per-sector disk, the average space efficiency rate is 86% or so.
Offline
But you get more with JFS than you'll get with ext4.
Offline
Most filesystems reserve X amount of space to allow root management.
less +/-m tunefs
Offline
Most filesystems reserve X amount of space to allow root management.
less +/-m tunefs
OP wrote in his first post: "I am not talking about the -m "reserved blocks" format option in ext234".
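You can also see both numbers side by side in the superblock, which makes it clear that the reserved blocks are separate from the formatting overhead the OP is asking about (a sketch; the device is a placeholder):
dumpe2fs -h /dev/sdXN | grep -Ei 'block count|free blocks'   # prints "Block count", "Reserved block count" and "Free blocks"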
Offline
If you use XFS you get more usable space:
Partition capacity
Initial (after filesystem creation) and residual (after removal of all files) partition capacity was computed as the ratio of the number of available blocks to the number of blocks on the partition. Ext3 has the worst initial capacity (92.77%), while the other filesystems preserve almost full partition capacity (ReiserFS = 99.83%, JFS = 99.82%, XFS = 99.95%). Interestingly, the residual capacity of Ext3 and ReiserFS was identical to the initial one, while JFS and XFS lost about 0.02% of their partition capacity, suggesting that these filesystems can dynamically grow but do not completely return to their initial state (and size) after file removal.
The point is how the filesystem handles its own structures. I used XFS on my media box and got those 99% of space; ext3/4 was much worse... But I switched to ext4 due to slow file deletes on XFS. I may consider reiser4 when I have time.
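(The capacity ratio they describe can be checked directly from df right after mounting a freshly created filesystem, e.g. something like this; /mnt/test is a placeholder and this is not an exact reconstruction of the benchmark's method, just the same ratio:
df -P -B1 /mnt/test | awk 'NR==2 { printf "capacity: %.2f%%\n", 100 * $4 / $2 }'   # available bytes / total bytes
)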
Last edited by Kirurgs (2011-02-23 06:45:47)
Offline
If you use XFS you get more usable space:
My test shows otherwise. Could you please try and run my script on your box so we can see if the behaviour is reproducible?
Offline
I switched to ext4 some time ago, so I won't be able to reproduce the problem. The cause (already mentioned in a previous post) was that XFS had bad file deletion performance. If memory serves me well, XFS was otherwise really the best: space, performance, etc.
Offline
I added reiser4 and reiserfs to mine:
umount: /dev/shm/test: not found
Creating test.img
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.09046 s, 985 MB/s
Creating directory /dev/shm/test
[mkfs.ext4] Running mkfs.ext4 on test.img
[mkfs.ext4] Mounting test.img
[mkfs.ext4] Running df -h
/dev/shm/test.img 1008M 34M 924M 4% /dev/shm/test
[mkfs.ext4] Creating many 1MB file chunks with random data
[mkfs.ext4] Running df -h a second time
/dev/shm/test.img 1008M 1008M 0 100% /dev/shm/test
[mkfs.ext4] Number of files
977
[mkfs.ext4] Unmounting...
[mkfs.jfs] Running mkfs.jfs on test.img
[mkfs.jfs] Mounting test.img
[mkfs.jfs] Running df -h
/dev/shm/test.img 1020M 260K 1020M 1% /dev/shm/test
[mkfs.jfs] Creating many 1MB file chunks with random data
[mkfs.jfs] Running df -h a second time
/dev/shm/test.img 1020M 1020M 0 100% /dev/shm/test
[mkfs.jfs] Number of files
1020
[mkfs.jfs] Unmounting...
[mkfs.xfs -f] Running mkfs.xfs -f on test.img
[mkfs.xfs -f] Mounting test.img
[mkfs.xfs -f] Running df -h
/dev/shm/test.img 1014M 33M 982M 4% /dev/shm/test
[mkfs.xfs -f] Creating many 1MB file chunks with random data
[mkfs.xfs -f] Running df -h a second time
/dev/shm/test.img 1014M 1014M 20K 100% /dev/shm/test
[mkfs.xfs -f] Number of files
983
[mkfs.xfs -f] Unmounting...
[mkfs.btrfs] Running mkfs.btrfs on test.img
[mkfs.btrfs] Mounting test.img
[mkfs.btrfs] Running df -h
/dev/shm/test.img 1.0G 56K 1.0G 1% /dev/shm/test
[mkfs.btrfs] Creating many 1MB file chunks with random data
[mkfs.btrfs] Running df -h a second time
/dev/shm/test.img 1.0G 822M 203M 81% /dev/shm/test
[mkfs.btrfs] Number of files
895
[mkfs.btrfs] Unmounting...
[mkfs.reiser4 -f] Running mkfs.reiser4 -f on test.img
[mkfs.reiser4 -f] Mounting test.img
[mkfs.reiser4 -f] Running df -h
/dev/shm/test.img 973M 148K 973M 1% /dev/shm/test
[mkfs.reiser4 -f] Creating many 1MB file chunks with random data
[mkfs.reiser4 -f] Running df -h a second time
/dev/shm/test.img 973M 971M 2.7M 100% /dev/shm/test
[mkfs.reiser4 -f] Number of files
972
[mkfs.reiser4 -f] Unmounting...
[mkfs.reiserfs -f] Running mkfs.reiserfs -f on test.img
[mkfs.reiserfs -f] Mounting test.img
[mkfs.reiserfs -f] Running df -h
/dev/shm/test.img 1.0G 33M 992M 4% /dev/shm/test
[mkfs.reiserfs -f] Creating many 1MB file chunks with random data
[mkfs.reiserfs -f] Running df -h a second time
/dev/shm/test.img 1.0G 1.0G 108K 100% /dev/shm/test
[mkfs.reiserfs -f] Number of files
992
[mkfs.reiserfs -f] Unmounting...
Interesting results, I'd say.
Offline