Pages: 1
Topic closed
'df'
/dev/sda1 1,9T 1,7T 200G 90% /
'btrfs fi df /'
Data, single: total=1.80TiB, used=1.62TiB
System, DUP: total=32.00MiB, used=240.00KiB
Metadata, DUP: total=4.00GiB, used=2.73GiB
'btrfs fi show /'
Label: 'root' uuid: 97ccc2db-5bdf-45f6-8d9f-daadea02b918
Total devices 1 FS bytes used 1.62TiB
devid 1 size 1.82TiB used 1.81TiB path /dev/sda1
So they are very much the same. But if I calculate the actual size of all the files with 'du' or 'ncdu', they give 1108G total.
So btrfs says 1.62TiB is used, while the files add up to only about 1.08TiB.
Where are those ~500G coming from?
I have no snapshots, just subvolumes, and I accounted for all of them. It is not a converted fs. It has lzo compression enabled, not forced. I tried a 'btrfs fi balance' but nothing changed.
I thought it might be a cosmetic issue, but using 'fallocate' I created a 200G file, and then it failed to create another one of 15G because the disk was full.
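The fallocate probe is a good test precisely because, unlike truncate, it reserves real blocks. A small-scale sketch of the difference (hypothetical file names):

```shell
# fallocate reserves real blocks, while truncate only creates a sparse
# file, so an ENOSPC from fallocate reflects a genuine lack of space.
fallocate -l 1M probe.real     # allocates 1MiB of actual blocks
truncate  -s 1M probe.sparse   # same apparent size, no blocks
du -k probe.real probe.sparse  # the sparse file occupies ~0K
rm probe.real probe.sparse
```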
* Most of the files are media, and so unlikely to compress.
** Just to clarify:
btrfs subvolume list -p /
ID 257 gen 234549 parent 5 top level 5 path __active
ID 258 gen 234572 parent 257 top level 257 path home
ID 259 gen 234641 parent 257 top level 257 path var
ID 262 gen 234643 parent 257 top level 257 path media
With 'du --total --block-size=1G -x' I got:
/=3G
/home=15G
/media=1089G
/var=1G
Altogether: 1108G
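As a sanity check on those figures, the gap works out like this (integer GiB arithmetic, numbers taken from the outputs above):

```shell
# 1.62 TiB used according to btrfs, 1108 GiB counted by du.
used_gib=$((162 * 1024 / 100))   # 1.62 TiB -> ~1658 GiB
du_gib=1108
echo "unaccounted: $((used_gib - du_gib)) GiB"   # roughly the 500G gap
```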
Last edited by eduardo.eae (2014-04-20 18:57:12)
Offline
My btrfs volume usage went down by a few GB (on ~30GB total) after disabling lzo. I guess compression actually makes a lot of files bigger.
Found this.
Last edited by Rexilion (2014-04-20 08:02:40)
fs/super.c : "Self-destruct in 5 seconds. Have a nice day...\n",
Offline
It would not make sense for compression to make files bigger.
Is there any way to force decompression of all files? With defrag one can force compression.
The free space reported by df and btrfs is roughly the same, but accounting for all the files gives a much lower total.
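There is no dedicated "decompress" command as far as I know; a commonly suggested approach (untested sketch, run at your own risk) is to remount without compression and then rewrite every file with a recursive defrag. Here with a DRY_RUN guard so it only prints the commands by default:

```shell
#!/bin/sh
# Sketch: remount without compress, then rewrite all files so they end
# up stored uncompressed. DRY_RUN=1 (the default) only prints commands.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi
}
run mount -o remount,compress=no /
run btrfs filesystem defragment -r /
```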
Offline
It would not make any sense that compression makes bigger files.
If a file doesn't lend itself to compression (e.g. it is already in a compressed format, or is just random garbage), the overhead the compression format introduces will actually make the file bigger, though not by much.
Last edited by the_shiver (2014-04-20 11:36:16)
Offline
The extra is due to metadata.
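For scale: metadata uses the DUP profile, so its on-disk footprint is double the quoted figure, but even doubled it is nowhere near 500G (values scaled by 100 to keep the arithmetic integer):

```shell
# Metadata used: 2.73 GiB, stored twice because of the DUP profile.
meta_used_x100=273
echo "raw metadata x100: $((meta_used_x100 * 2))"   # 546 -> ~5.46 GiB
```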
CPU-optimized Linux-ck packages @ Repo-ck • AUR packages • Zsh and other configs
Offline
'Metadata, DUP: total=4.00GiB, used=2.73GiB
[...]
So they are very much the same. But if I calculate the actual size of all the files with 'du' or 'ncdu' they give 1108G total.
[...]
The space occupied is 500G bigger than the size of all the files.
Offline
eduardo.eae wrote: It would not make sense for compression to make files bigger.
the_shiver wrote: If a file doesn't lend itself to compression (e.g. it is already in a compressed format, or is just random garbage), the overhead the compression format introduces will actually make the file bigger, though not by much.
btrfs doesn't compress files that are already compressed or don't compress well, unless you use the compress-force flag.
With the compress flag, it won't compress a file unless the result is smaller than the original [source].
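That skip behaviour is easy to see with a quick experiment (using gzip here as a stand-in for lzo): random data gains nothing and even grows slightly from header overhead, while repetitive data shrinks dramatically.

```shell
# Compare compressing incompressible (random) vs highly compressible
# (all-zero) data of the same size.
head -c 1048576 /dev/urandom > random.bin
head -c 1048576 /dev/zero    > zeros.bin
gzip -k random.bin zeros.bin
ls -l random.bin.gz zeros.bin.gz   # random.bin.gz slightly LARGER than 1MiB
rm random.bin zeros.bin random.bin.gz zeros.bin.gz
```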
Last edited by ooo (2014-04-20 13:39:21)
Offline
That's what I thought, thanks.
Still, there are 500G unaccounted for.
I have btrfs on another 3 PCs and none of them shows this problem.
Using du and ncdu gives me 1.1TB of files in total, but btrfs shows 1.6TB occupied. No snapshots, just subvolumes, and they are all accounted for.
Offline
@ooo
My comment was meant in general, not specifically about btrfs, which does skip such files as you pointed out.
Offline
The space occupied is 500G bigger than the size of all the files.
I see that now... not sure why. If you copy everything off, reformat, and copy it back, does the accounting problem come back as well? Perhaps first install btrfs-progs from [testing] prior to the reformat if you choose to do this.
Offline
If I back everything up and reformat, I will put on something more stable and safe like ext4. But until next weekend I have no disk to back up to.
I've been searching and no one seems to have the same problem. I really liked btrfs, but with this problem it's a no-go. The disk is about 4 months old and it is a clean install.
* Updated first post to make clear from where I got the numbers
Last edited by eduardo.eae (2014-04-20 18:59:15)
Offline
Perhaps you should ask on the btrfs ML.
Offline
Thanks for the help, I'll just format it with ext4 whenever I can.
I also remembered that whenever I enabled quota, memory usage kept growing until 'out of memory'.
Too many problems for a headless home server.
Offline
Just wondering, are you using the space_cache or inode_cache mount options? If so, maybe mounting with clear_cache could help.
clear_cache (since 2.6.37)
Clear all the free space caches during mount. This is a safe option, but will trigger rebuilding of the space cache, so leave the filesystem mounted for some time and let the rebuild process finish. If the process btrfs-freespace is actively doing some IO, it's probably not finished yet. This mount option is intended to be used one time and only after you notice some problems with free space.
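For reference, a one-time fstab entry for this (reusing the UUID from the 'btrfs fi show' output above; remove clear_cache again after one successful mount and rebuild):

```shell
# /etc/fstab — one-time entry; drop clear_cache once the rebuild finishes
UUID=97ccc2db-5bdf-45f6-8d9f-daadea02b918  /  btrfs  defaults,compress=lzo,clear_cache  0  0
```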
Offline
It is using space_cache. I tried clear_cache and nothing changed. Thanks anyway.
Offline
I ran into the same problem and this topic helped me.
https://askubuntu.com/questions/464074/ … 6458d76ca1
Last edited by 86kkd (2024-02-06 14:24:14)
Offline
@86kkd: you couldn't wait just 2.5 more months to make this a DECADE necrobump?
Offline
Indeed, thanks for providing a link that explains this concept, but please pay attention to the dates and don't bump 10-year-old posts.
Closing.
Offline