I just switched my system from ext4 to btrfs with btrfs-convert, deleted the ext2_saved subvolume, ran a balance, and made an @ subvolume for the root (and set it as the default) and an @home subvolume for my home directory, for snapshots later. I want to try compression, but even after enabling it in seemingly every way at once, I do not appear to be getting any compression.
- I have set compress=zstd:15 in the fstab for both subvolumes
/dev/mapper/arch-root / btrfs rw,relatime,discard,compress=zstd:15,subvol=@ 0 1
/dev/mapper/arch-root /home btrfs rw,relatime,discard,compress=zstd:15,subvol=@home 0 1
- On my kernel command line, I have passed rootflags=compress=zstd:15 to be sure that it gets applied
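Just to be thorough, the flag can be confirmed on the running kernel with a plain grep against /proc/cmdline (nothing btrfs-specific):
# print the rootflags portion of the live kernel command line
grep -o 'rootflags=[^ ]*' /proc/cmdline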
Running mount confirms that these compress options are applied as expected
/dev/mapper/arch-root on / type btrfs (rw,relatime,compress=zstd:15,ssd,discard,space_cache,subvolid=1213,subvol=/@)
/dev/mapper/arch-root on /home type btrfs (rw,relatime,compress=zstd:15,ssd,discard,space_cache,subvolid=1214,subvol=/@home)
However, compression does not seem to actually happen. btrfs filesystem defragment -r / seems to just increase the amount of space used, whether I run it just like that, with `-c`, or with `-czstd`.
Additionally, I tried running
btrfs property set / compression zstd
btrfs property set /home compression zstd
and then defragging again, with the same effect.
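(For what it's worth, btrfs property get can read the value back, which at least confirms the property was accepted:)
# read the compression property back for each mount point
btrfs property get / compression
btrfs property get /home compression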
As of right now, filelight reports 548.5 GiB used, of which 478.5 GiB is my home directory; du reports 532G, of which 477G is in my home directory (I assume "G" means GiB, given how close its home directory figure is to filelight's); and
btrfs filesystem usage
reports the highest usage, at 553.36 GiB of data (full output below):
Overall:
Device size: 779.25GiB
Device allocated: 725.99GiB
Device unallocated: 53.25GiB
Device missing: 0.00B
Device slack: 16.00EiB
Used: 557.09GiB
Free (estimated): 219.88GiB (min: 219.88GiB)
Free (statfs, df): 219.88GiB
Data ratio: 1.00
Metadata ratio: 1.00
Global reserve: 512.00MiB (used: 0.00B)
Multiple profiles: no
Data,single: Size:719.99GiB, Used:553.36GiB (76.86%)
Metadata,single: Size:5.97GiB, Used:3.73GiB (62.45%)
System,single: Size:32.00MiB, Used:128.00KiB (0.39%)
Quite a large portion of my data should be compressible. I expect that if compression is being applied, the amount of space used for data should go down, used space + minimum free space should be equal to the size of the drive, and estimated free space should be greater than minimum free space. Am I missing something?
Offline
Compression only happens when files are written.
AFAIU btrfs-convert won't compress the files during conversion.
Offline
I thought running the defragment operation was supposed to rewrite all of the files, thus causing them to be compressed?
Offline
According to this you can run: btrfs filesystem defrag -czstd <FILE>. So perhaps you can use
find /... -exec...
Try it on a single file first to check.
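To spell that out, something like this might work over a whole tree (an untested sketch; /home is only an example path):
# untested sketch: re-defragment (and recompress) every regular file under /home
# {} + batches many files into each defragment invocation
find /home -type f -exec btrfs filesystem defragment -czstd {} +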
Offline
Tried it on a Pop!_OS ISO I had lying around, because I've heard it said that ISOs tend to be compressible:
% btrfs filesystem du pop-os_22.04_amd64_nvidia_40.iso
Total Exclusive Set shared Filename
2.84GiB 1.43GiB 1.41GiB pop-os_22.04_amd64_nvidia_40.iso
% btrfs filesystem defragment -czstd pop-os_22.04_amd64_nvidia_40.iso
% btrfs filesystem du pop-os_22.04_amd64_nvidia_40.iso
Total Exclusive Set shared Filename
2.84GiB 2.84GiB 0.00B pop-os_22.04_amd64_nvidia_40.iso
If I'm understanding this right, there was some deduplication and no compression here, and the defrag just undid the deduplication without compressing anything? It took a while to run, which implies it was at least doing something.
In case that wasn't compressible enough, I tried it with 2 GiB of zeroes, which should be the best-case scenario for any compression algorithm.
% dd if=/dev/zero of=zero.img bs=4M count=512
512+0 records in
512+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 2.91523 s, 737 MB/s
% btrfs filesystem du zero.img
Total Exclusive Set shared Filename
896.00MiB 896.00MiB 0.00B zero.img
% btrfs filesystem defragment -czstd zero.img
% btrfs filesystem du zero.img
Total Exclusive Set shared Filename
2.00GiB 2.00GiB 0.00B zero.img
And if I'm understanding that correctly, there was compression here when the file was initially written, though not nearly as much as would be expected for this type of file (zip with zstd at level 1 compresses it to 64.1 KiB), and the defrag completely removed it.
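As an aside, comparing plain du (allocated blocks) with du --apparent-size (logical bytes) is another rough check, since btrfs reports compressed files' block counts through stat:
# allocated blocks vs logical size; a large gap suggests compression (or a sparse file)
du -h zero.img
du -h --apparent-size zero.img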
Offline
Yes, AFAIU Exclusive shows how much is not "deduped", and to check the compression you can compare btrfs filesystem du with regular du.
Having both compression and dedup might be counterproductive, since most dedup approaches (like bees) work on the block level while compression works on the file level.
Offline
Check with compsize
# compsize /
Processed 76260 files, 51343 regular extents (55891 refs), 42894 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL       40%         1.2G         2.9G         3.3G
none       100%         350M         350M         360M
lzo         28%         9.6M          34M          35M
zstd        32%         865M         2.5G         2.9G
prealloc   100%         3.7M         3.7M         2.5M
Having both compression and dedup might be counterproductive, since most dedup approaches (like bees) work on the block level while compression works on the file level.
Not exactly. It's file extents and 128 KiB blocks.
Both work fine together.
Works with btrfs compression - dedupe any combination of compressed and uncompressed files
Side note:
defrag breaks COW/dedup
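compsize also takes individual files, so the zero.img experiment from earlier could be re-checked directly (assuming the file is still around):
# per-file report: compare the Disk Usage and Uncompressed columns
compsize zero.img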
Offline