I've recently formatted a 1.5 TB drive with the intention of, among other things, replacing NTFS with ext4.
Then I noticed that the files I saved don't fit on the new partition. df reports:
ext4:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdb1 1442146364 71160 1442075204 1% /media/Seagate
ntfs (the figures are similar for all other filesystem options that gparted offers):
/dev/sdb1 1465137148 110700 1465026448 1% /media/Seagate
If my math doesn't fail me, that 1K-blocks difference comes to roughly 23 million blocks, i.e. a glaring 22 GiB less usable space.
I've searched the forum, but found nothing aside from the usual tune2fs -O ^has_journal / -r 0 / -m 0 suggestions, which are not relevant here.
Is there any way to reclaim that space? I have to use ext4 or ntfs as I require access from Windows every now and then.
(What's the cause of that difference, anyway? I mean, I'd understand if it were some 100 MB, but 20+ GB‽)
Last edited by misc (2011-05-21 21:33:05)
By default, 5 % of the filesystem is reserved for the superuser. I thought -m 0 would set that reservation to 0 %, but according to your post it doesn't help. I dunno.
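For what it's worth, the reservation wouldn't account for the gap anyway; here's a rough sketch quantifying what a 5 % reservation would cost, using the 1K-block total from the df output above (the tune2fs line is only shown commented out, since it needs root and a real device):

```shell
# Sketch: what a 5 % root reservation would cost on this filesystem,
# using the 1K-block total from the df output above.
total_kblocks=1442146364
reserved_gib=$(( total_kblocks * 5 / 100 / 1024 / 1024 ))
echo "$reserved_gib GiB would be reserved at -m 5"
# tune2fs -m 0 /dev/sdb1   # sets the reservation to 0 % (run as root)
```

Note that this comes to far more than the 22 GiB gap, so the two numbers can't be the same thing.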
https://wiki.archlinux.org/index.php/Ex … ved_blocks
It's most likely related to reserved blocks, because the same thing happened to me. Anyway, post a `df -h`.
Last edited by JokerBoy (2011-05-21 13:00:40)
I appreciate your replies, but I've tried to make it clear that it really is not related to reserved blocks (which are set to 0). That's not possible anyway – those 23 million missing 1K-blocks are not available for reservation to begin with. (df output is in my initial post.)
Last edited by misc (2011-05-21 13:19:52)
please post the output of "fdisk -l /dev/sd?" (replace the ? with the appropriate letter for the drive).
That will show us any partitions present, as well as some data about the physical characteristics of the drive.
fdisk -l /dev/sdc:
WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdc1 1 2930277167 1465138583+ ee GPT
df /media/Seagate:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdc1 1442146364 71160 1442075204 1% /media/Seagate
Not much help, but fwiw I've seen the same thing with ext4. I have two Western Digital 1TB external drives. One is formatted with ext4 and the other uses jfs. I keep all of my media/documents/junk on the ext4 drive, which is then mirrored on the jfs drive. Both drives have the exact same files and I think I've done the tune2fs stuff properly on the ext4 drive. Still, it has 15G less total space available.
[mike@robots ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdd1 932G 336G 597G 36% /media/8c4d270d-6139-4fed-8a32-2df8d28acdf3
/dev/sdb1 917G 337G 581G 37% /media/3d824490-860b-4bb5-8fce-ddfe76afc5a3
[mike@robots ~]$ sudo fdisk -l /dev/sdb
Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
81 heads, 63 sectors/track, 382818 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xe8900690
Device Boot Start End Blocks Id System
/dev/sdb1 2048 1953525167 976761560 83 Linux
[mike@robots ~]$ sudo fdisk -l /dev/sdd
Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
81 heads, 63 sectors/track, 382818 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0006b9b8
Device Boot Start End Blocks Id System
/dev/sdd1 2048 1953525167 976761560 83 Linux
[mike@robots ~]$ tune2fs -l /dev/sdb1
tune2fs 1.41.14 (22-Dec-2010)
Filesystem volume name: <none>
Last mounted on: /media/3d824490-860b-4bb5-8fce-ddfe76afc5a3
Filesystem UUID: 3d824490-860b-4bb5-8fce-ddfe76afc5a3
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: (none)
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 61054976
Block count: 244190390
Reserved block count: 0
Free blocks: 152137237
Free inodes: 60900134
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 965
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Flex block group size: 16
Filesystem created: Fri Jul 30 10:48:53 2010
Last mount time: Sat May 21 12:00:01 2011
Last write time: Sat May 21 12:00:01 2011
Mount count: 85
Maximum mount count: 21
Last checked: Fri Jul 30 10:48:53 2010
Check interval: 15552000 (6 months)
Next check after: Wed Jan 26 09:48:53 2011
Lifetime writes: 1443 GB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: 816d343c-1777-4d46-8e37-d229a99ed00d
Journal backup: inode blocks
I posted the question at stackexchange:
Turns out it's due to how ext2/3/4 organize their metadata: they allocate space for a fixed number of inodes at filesystem creation time, whereas filesystems like NTFS allocate it gradually during operation. By default, a rather low bytes-per-inode ratio is used to calculate the inode count (see /etc/mke2fs.conf), so the partition ended up with ~91.5 million inodes, which occupied those 22 GiB. I've recreated the partition with "mkfs.ext4 -m 0 -O sparse_super -T largefile4"; the latter option means an inode_ratio of 4194304 instead of 256. The inode count is therefore now 357728 (i.e. ~87 MiB of inode tables) and available blocks are at 1464880364.
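To put a number on that, here is a quick sketch with shell arithmetic; the ~91.5 million inode count is the figure from this thread and 256 bytes is the inode size shown in the tune2fs output above:

```shell
# Sketch: space consumed by the inode tables alone, using the
# ~91.5 million inodes the default ratio produced on this partition
# and a 256-byte inode size.
inode_count=91500000
inode_size=256
overhead_mib=$(( inode_count * inode_size / 1024 / 1024 ))
echo "$overhead_mib MiB of inode tables"   # roughly 22 GiB
```

That lines up with the ~23 million missing 1K-blocks from the df comparison.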
Last edited by misc (2011-05-21 21:41:29)
What's the down side of that tweak? Potential to run out of inodes?
To my knowledge that's the main downside (particularly if one were to use that option on much smaller partitions), and this mythtv page warns that, for ext3 at least, it can "drastically increase the amount of time required to delete a large file". There may be more disadvantages, of course.
Given that my / has 983040 inodes of which 573466 are free I'm anything but worried, however.
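For anyone who does want to keep an eye on it, `df -i` reports inode usage instead of block usage; the percentage can also be reproduced with shell arithmetic (the counts below are the ones quoted just above):

```shell
# Sketch: inode usage on / from the counts quoted above.
total_inodes=983040
free_inodes=573466
used_pct=$(( (total_inodes - free_inodes) * 100 / total_inodes ))
echo "$used_pct % of inodes in use"
# df -i /   # live equivalent, reports inodes rather than blocks
```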
"mkfs.ext4 -m 0 -O sparse_super -T largefile4", the latter option means a inode_ratio of 4194304 instead of 256.
In fact the default inode_ratio is 16384, not 256. Also, sparse_super is already a default, so there's no need to set it.
I used "mkfs.ext4 -i 262144" on my 1TB partition to set the inode_ratio to 256 KiB, after checking that the average file size was going to be about twice that. (The -i option sets the inode_ratio directly, instead of going through -T largefile (1 MiB inode_ratio) or -T largefile4 (4 MiB inode_ratio).) I gained 13 GiB that way.
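To estimate the saving before reformatting, the inode count is roughly partition size divided by inode_ratio (mke2fs rounds per block group, so real counts differ slightly); a sketch using the 1 TB drive from the fdisk output earlier in this thread:

```shell
# Sketch: approximate inode counts and space saved by raising the ratio.
# mke2fs rounds per block group, so the real figures differ a little.
part_bytes=1000204886016   # ~1 TB drive from the fdisk output above
default_ratio=16384        # stock bytes-per-inode in /etc/mke2fs.conf
chosen_ratio=262144        # what -i 262144 asks for
inode_size=256
def_inodes=$(( part_bytes / default_ratio ))
new_inodes=$(( part_bytes / chosen_ratio ))
saved_mib=$(( (def_inodes - new_inodes) * inode_size / 1024 / 1024 ))
echo "$saved_mib MiB reclaimed"   # ~13.6 GiB, matching the ~13 GiB above
```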