Hi, I use btrfs for my /home. GNOME notified me that the disk was full, so I deleted many files, but it still says the disk is full.
After a while, Arch Linux crashed because the disk could no longer be written to.
# btrfs fi show
Label: 'HOME'  uuid: 35fdbc51-4db2-4e60-a585-3d8e381e2f85
    Total devices 1 FS bytes used 818.24GiB
    devid    1 size 900.00GiB used 899.99GiB path /dev/nvme0n1p6
I tried to fix it with `btrfs balance start -v -dusage=0 /mnt`, but got the following message:
# sudo btrfs balance start -v -dusage=0 /mnt
Dumping filters: flags 0x1, state 0x0, force is off
DATA (flags 0x2): balancing, usage=0
Done, had to relocate 0 out of 854 chunks
I don't want to try anything more myself, because I don't want to lose my data to a wrong command. For now I can still access my data; I just can't boot the system when it is mounted as /home.
Can anyone help me fix this disk-full issue?
I used the `rm` command to delete many files, including several files of 100 GB or more, so it seems impossible that the disk is really full.
Thanks
Last edited by zw963 (2024-10-10 11:49:50)
Offline
Hello. Don’t you have any snapshots? Deleting files doesn’t free space if those files are also referenced by snapshots that share the same allocations.
If not, providing the full output of the following command may help with debugging:
sudo btrfs filesystem usage /path/to/filesystem
Note that the filter `usage=0` only reclaims empty chunks. It’s very quick and never fails, but the reason it’s quick and doesn’t fail is that no reallocation is done. It basically does housekeeping, if large amounts of data were removed and left some completely unused chunks.⁽¹⁾ You may need to increase the value, so an actual reallocation is done. The downside is that the operation itself takes longer and may fail if there is not enough space (a btrfs version of catch-22).
____
⁽¹⁾ Though btrfs-balance claims it’s no longer needed.
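If you do raise the threshold, a cautious approach is to step it up gradually, so each pass frees room for the next one. A minimal sketch, assuming the filesystem is mounted at /mnt as in your commands above (run as root; the threshold sequence is arbitrary):

```shell
# Step the usage threshold up gradually: each pass compacts chunks that are
# at most t% full, freeing unallocated space the next pass can work with.
# Stop at the first failure instead of pushing a nearly-full filesystem.
for t in 5 10 20 40; do
    echo "== balancing data chunks at most ${t}% full =="
    btrfs balance start -dusage="$t" /mnt || break
done
```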
Last edited by mpan (2024-10-08 22:57:33)
Sometimes I seem a bit harsh — don’t get offended too easily!
Offline
Hi, I deleted almost all the snapshots created by btrbk, then ran the following command:
╰─ $ btrfs filesystem usage /mnt/btr_pool
Overall:
    Device size:                 900.00GiB
    Device allocated:            876.99GiB
    Device unallocated:           23.00GiB
    Device missing:                  0.00B
    Device slack:                    0.00B
    Used:                        674.42GiB
    Free (estimated):            147.30GiB  (min: 135.80GiB)
    Free (statfs, df):           147.30GiB
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB  (used: 0.00B)
    Multiple profiles:                  no

Data,single: Size:780.98GiB, Used:656.68GiB (84.08%)
   /dev/nvme0n1p6  780.98GiB

Metadata,DUP: Size:48.00GiB, Used:8.87GiB (18.47%)
   /dev/nvme0n1p6   96.00GiB

System,DUP: Size:8.00MiB, Used:128.00KiB (1.56%)
   /dev/nvme0n1p6   16.00MiB

Unallocated:
   /dev/nvme0n1p6   23.00GiB
Is usable disk space being released now? (Sorry, I'm not familiar with the btrfs commands.)
Thanks
EDIT:
Could you please tell me which line shows the space that is really free?
Last edited by zw963 (2024-10-09 05:02:16)
Offline
How much space do you expect to be free at this point? You should be able to fit at least 20 GB, possibly even over 100 GB.
The amount of space allocated for data is a bit higher than the size of the data, so you could benefit from forcing reallocation with `btrfs balance` with a higher threshold (you may start with 10 or 20). But with current allocations already taking 98% of available capacity, the operation may fail.
If what you observe is notably different from what you expect: did you try to reboot? Chances are low, but it’s possible some process is still keeping unnamed files in the filesystem.
Answering your last question: there is no “really free”. There never has been; the concept itself is an illusion. With an ultra-simple filesystem that illusion holds reasonably well until the amount of data gets close to the filesystem’s size, in particular if your definition of “free space” ignores performance. But it falls apart much faster with complex filesystems like btrfs.
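That said, the “Free (estimated)” line is not magic: it can be reconstructed from the other numbers you posted as unallocated space plus the slack inside already-allocated data chunks, with the “min” variant assuming all unallocated space gets consumed by DUP metadata. A back-of-the-envelope sketch, not the exact kernel calculation:

```shell
# Values copied from the posted `btrfs filesystem usage` output.
unalloc=23.00      # Device unallocated (GiB)
data_size=780.98   # Data,single Size
data_used=656.68   # Data,single Used
meta_ratio=2.00    # Metadata ratio (DUP stores everything twice)

# estimated = unallocated + slack inside allocated data chunks
est=$(awk -v u="$unalloc" -v s="$data_size" -v d="$data_used" \
      'BEGIN { printf "%.2f", u + (s - d) }')
# min = data slack only, with unallocated space counted at the metadata ratio
min=$(awk -v u="$unalloc" -v s="$data_size" -v d="$data_used" -v r="$meta_ratio" \
      'BEGIN { printf "%.2f", (s - d) + u / r }')
echo "Free (estimated): ${est}GiB (min: ${min}GiB)"
# -> Free (estimated): 147.30GiB (min: 135.80GiB)
```

Both figures match the lines btrfs printed for you, which is why deleting files moves them only indirectly: first the data has to stop being referenced, then balance has to return the emptied chunks to the unallocated pool.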
Last edited by mpan (2024-10-09 06:05:25)
Offline
Thanks for the help. I tried rebalancing with a threshold of 10, like this:
╰─ $ sudo btrfs balance start -v -dusage=10 /mnt/btr_pool
Dumping filters: flags 0x1, state 0x0, force is off
DATA (flags 0x2): balancing, usage=10
Done, had to relocate 37 out of 831 chunks
Then the result was as follows (unallocated went from 23 GiB to 60 GiB):
Overall:
    Device size:                 900.00GiB
    Device allocated:            839.99GiB
    Device unallocated:           60.00GiB
    Device missing:                  0.00B
    Device slack:                    0.00B
    Used:                        674.66GiB
    Free (estimated):            147.05GiB  (min: 117.05GiB)
    Free (statfs, df):           147.05GiB
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:              512.00MiB  (used: 0.00B)
    Multiple profiles:                  no

Data,single: Size:743.98GiB, Used:656.93GiB (88.30%)
   /dev/nvme0n1p6  743.98GiB

Metadata,DUP: Size:48.00GiB, Used:8.87GiB (18.47%)
   /dev/nvme0n1p6   96.00GiB

System,DUP: Size:8.00MiB, Used:128.00KiB (1.56%)
   /dev/nvme0n1p6   16.00MiB

Unallocated:
   /dev/nvme0n1p6   60.00GiB
Should I continue rebalancing with 20?
BTW, the following is the fstab entry for my btrfs partition (I use zstd compression):
LABEL=HOME /mnt/btr_pool btrfs rw,noatime,ssd,discard=async,space_cache=v2,subvolid=5,subvol=/,compress=zstd 0 0
Thanks
Last edited by zw963 (2024-10-09 07:23:51)
Offline
zw963 wrote: I deleted almost all the snapshots created by btrbk
I think you should delete some more. Older snapshots will take up more space, so start with those.
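A sketch of how to review them oldest-first before deleting anything (the mountpoint is an assumption taken from this thread; run as root):

```shell
# List snapshots only (-s), sorted by the generation at creation time
# (--sort=ogen), so the oldest come first.
snaps=$(btrfs subvolume list -s --sort=ogen /mnt/btr_pool 2>/dev/null || true)
printf '%s\n' "${snaps:-(nothing listed; adjust the mountpoint)}"
# Then delete the unneeded ones, one at a time, e.g.
# (hypothetical snapshot path):
#   btrfs subvolume delete /mnt/btr_pool/_btrbk_snap/home.20240901
```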
Para todos todo, para nosotros nada
Offline
zw963: you still didn’t answer: how much free space do you expect to see? What makes you think there should be that much free space?
If you wish, and you don’t mind the time it may take, you may go all the way to 99. But the higher the threshold you set, the less you really gain; btrfs needs some air to work with and to allocate new chunks. Before running balance with `-dusage=10` that budget seemed pretty tight, but now this is no longer the case.
Note that compression introduces another layer of complexity to free-space estimation. If new data is easily compressible, you can write a lot of it; if it is not, the amount of free space may be greatly overestimated.
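If you want to see how much your zstd mount option actually saves, and therefore how skewed the estimate may be, the compsize tool reports compressed versus uncompressed totals for a subtree. A sketch, assuming the compsize package is installed and the pool is mounted at /mnt/btr_pool:

```shell
# compsize walks the subtree and prints per-compression-type totals:
# on-disk usage, uncompressed size, and referenced size.
out=$(compsize /mnt/btr_pool 2>/dev/null || echo "compsize unavailable here")
printf '%s\n' "$out"
```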
Offline
Thanks, I ran it again with -dusage=20; now I see about 222 GB of free disk space (using GParted), and my system works well for now.
zw963 wrote: I deleted almost all the snapshots created by btrbk
I think you should delete some more. Older snapshots will take up more space, so start with those.
Hi, I run btrbk like this:
$ btrbk prune --preserve-backups
It leaves untouched the snapshots that still have backups on a portable disk, so I guess I need to figure out a way to clean up both to make more disk space available.
you still didn’t answer, how much free space do you expect to see. What makes you think there should be that much free space?
I deleted many files, so I thought probably 200 GB? I guess some files (e.g. VirtualBox VM images) have a high compression ratio. But when I use duf to check the free space, it shows only 135 GB available, while GParted reports 220 GB.
╰─ $ duf /home
+------------------------------------------------------------------------------------------------+
|                                         1 local device                                          |
+------------+--------+--------+--------+-------------------------------+-------+----------------+
| MOUNTED ON | SIZE   | USED   | AVAIL  | USE%                          | TYPE  | FILESYSTEM     |
+------------+--------+--------+--------+-------------------------------+-------+----------------+
| /home      | 900.0G | 687.3G | 135.3G | [###############.....]  76.4% | btrfs | /dev/nvme0n1p6 |
+------------+--------+--------+--------+-------------------------------+-------+----------------+
Anyway, I think the space I freed is larger than the space reported as free now, so I still need to clean up more snapshots; I guess I can try running "btrbk prune --wipe" later.
Thanks
Last edited by zw963 (2024-10-10 11:51:10)
Offline
Hi, thanks for all the help. After deleting all snapshots and backups, the available space is now 500 GB, as reported by GParted.
Offline