Hello, I recently decided to use F2FS with LZ4 compression to store games, so I created a new volume with the flags extra_attr, compression, inode_checksum, sb_checksum, and lost_found, added compress_algorithm=lz4 to the mount options, and used chattr -R +c on the mount point to enable compression for the whole volume. I had verified that compression works by running df, copying a large (compressible) text file to the volume, running df again and subtracting the two outputs. The result was much smaller than the size of the text file shown by ls, so I assumed transparent compression was working fine. (Correction: I didn't know df reported values in multiples of 1024, so I have no verification that compression is working; df reports a difference of 144kB after copying a 136kB text file that only contains copies of the same word.)
Is there a more convenient way to check the compression ratio of the drive or of existing files? The method I mentioned is useless if you haven't run df before copying the file in question. I tried running du --apparent-size -h at the mount point and df -h to compare apparent sizes against actual disk usage, but for some reason du reports a value that is 4GB lower than the output of df.
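For a per-file check that doesn't need a df baseline, one rough approach is to compare a file's apparent (logical) size against its allocated on-disk blocks via stat. This is only a sketch, and (as it turns out later in this thread) on F2FS the on-disk figure may not shrink even when compression is active:

```shell
# Compare a file's apparent size to its on-disk usage.
# Usage: fsize_report FILE
# Note: on F2FS the on-disk number may NOT reflect compression savings.
fsize_report() {
  apparent=$(stat -c %s "$1")             # logical size in bytes
  ondisk=$(( $(stat -c %b "$1") * 512 ))  # allocated 512-byte blocks
  printf '%s: apparent=%s bytes, on-disk=%s bytes\n' "$1" "$apparent" "$ondisk"
}
```

On a filesystem where compression releases space to the user (e.g. ZFS or btrfs), on-disk would come out much smaller than apparent for a compressible file.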
I still don't know the cause of the difference, as I later created a second 12GB F2FS volume without any features just to test things, and df was telling me that 5% was used even though the volume had just been created.
EDIT: To compare things I copied the text file I mentioned to a ZFS dataset with LZ4 enabled. Using du -h and du --apparent-size -h on the file returned 4.5K and 137K respectively, so compression is working as expected on ZFS. I did the same thing in the F2FS volume in question: --apparent-size returns 137K while plain du returns 140K. How is this even possible? Is F2FS's compression so transparent that you can't even verify that it works?
Thanks in advance.
Last edited by 08d09 (2021-01-05 20:00:31)
btrfs is the same: you can't see the compression with 'du', and there's no change when adding "--apparent-size". For btrfs there's a program, "compsize", that can show the real disk usage. Maybe there's also a special program for f2fs?
I still don't know the cause of the difference as I later tried creating a second 12GB f2fs volume just to test things without any features and df was telling me that 5% was used even though the volume was just created.
I expect that F2FS reserves 5% of the filesystem for root use only, just like the ext filesystems do.
btrfs is the same: you can't see the compression with 'du', there's no change when adding "--apparent-size". For btrfs there's a program "compsize" that can show the real disk usage. Maybe for f2fs there's also a special program?
I saw compsize while searching for information too, but unfortunately there doesn't seem to be a similar tool for F2FS. Search for "f2fs compression" and the most useful thing you'll get is the Phoronix article about the addition of the feature.
I expect that F2FS reserves 5% of the filesystem for root use only just like EXT does.
I thought of reserved blocks too, but there isn't a tune2fs alternative available for F2FS. Fortunately the sysfs entries are somewhat documented, so I found these under /sys/fs/f2fs/nvme1n1p2:
avg_vblocks dirty_segments gc_idle iostat_enable min_hot_blocks ram_thresh
batched_trim_sections discard_granularity gc_idle_interval iostat_period_ms min_ipu_util readdir_ra
cp_background_calls discard_idle_interval gc_max_sleep_time ipu_policy min_seq_blocks reclaim_segments
cp_foreground_calls encoding gc_min_sleep_time lifetime_write_kbytes min_ssr_sections reserved_blocks
cp_interval extension_list gc_no_gc_sleep_time main_blkaddr mounted_time_sec umount_discard_timeout
current_reserved_blocks features gc_pin_file_thresh max_small_discards moved_blocks_background unusable
data_io_flag free_segments gc_urgent max_victim_search moved_blocks_foreground
dir_level gc_background_calls gc_urgent_sleep_time migration_granularity node_io_flag
dirty_nats_ratio gc_foreground_calls idle_interval min_fsync_blocks ra_nid_pages
reserved_blocks is set to 0, and its description in the kernel documentation is "This parameter indicates the number of blocks that f2fs reserves internally for root", so I'm not sure whether any blocks are actually reserved for root, but I haven't found another explanation.
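Reading these entries can be scripted; a minimal sketch (the device name nvme1n1p2 comes from the listing above and will differ on other systems):

```shell
# Print selected F2FS sysfs tunables for a given per-device directory,
# e.g. f2fs_tunables /sys/fs/f2fs/nvme1n1p2
# The entry names are from the sysfs listing above.
f2fs_tunables() {
  dir="$1"
  for entry in reserved_blocks current_reserved_blocks lifetime_write_kbytes; do
    if [ -r "$dir/$entry" ]; then
      printf '%s: %s\n' "$entry" "$(cat "$dir/$entry")"
    fi
  done
}
```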
I also found a file called /sys/kernel/debug/f2fs/status that contains some relevant information.
Utilization: 14% (18022738 valid blocks)
- Node: 24239 (Inode: 6589, Other: 17650)
- Data: 17998499
- Inline_xattr Inode: 1449
- Inline_data Inode: 51
- Inline_dentry Inode: 123
- Compressed Inode: 1323, Blocks: 177748
- Orphan/Append/Update Inode: 0, 0, 0
This at least confirms that something is compressed, but does anyone know whether this data can be used to calculate the compression ratio? Also, I forgot to specify the mount option 'compress_algorithm=lz4' before copying some files, so that might explain it if there's a difference between compressed inodes and total inodes.
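The counters on the "Compressed Inode" line can at least be extracted mechanically. A sketch with awk (note the caveat: this only tells you how many blocks belong to compressed files, not how much space compression saved, so a true ratio cannot be derived from it alone):

```shell
# Extract the compressed inode/block counters from the f2fs debug
# status output, e.g.:
#   f2fs_compressed_stats /sys/kernel/debug/f2fs/status
# Input line format (from the status dump above):
#   - Compressed Inode: 1323, Blocks: 177748
f2fs_compressed_stats() {
  awk '/Compressed Inode:/ {
    gsub(/,/, "")                       # drop trailing commas
    print "compressed_inodes=" $4
    print "compressed_blocks=" $6
  }' "$1"
}
```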
I've not found a solution to this either. Additionally, despite the "Compressed Inode" line from the debug status, I'm not convinced anything is actually compressed. My experience matches this reddit thread, where people complain that compression has no observable effect.
Per the wiki and the f2fs kernel documentation it looks like setting the `c` attribute on a file should compress the file and setting the `c` attribute on a directory should compress any new files created.
To enable compression on regular inode, there are three ways:
* chattr +c file
* chattr +c dir; touch dir/file
* mount w/ -o compress_extension=ext; touch file.ext
So I went ahead and ran
chattr -R +c
on a bunch of directories that compress really well when I make tar.zst backups, but there was no change in free space before/after, so I don't think merely setting the attribute compresses existing data. I also tried making a folder with the `c` attribute set and using `rsync -a` to copy a bunch of files into the new directory, and free space shrank by the amount expected for no compression.
As a final test, I entered a directory that I've set the `c` attribute on and ran
dd if=/dev/zero of='10GB of zeros' bs=1M count=10240
and my free space decreased by 10GB. :-/
So yeah, I don't know. Looking at `/proc/mounts`, I do have `compress_algorithm=zstd,compress_log_size=2,compress_mode=fs` in my mount options.
I'm not convinced anything is actually compressed.
Probably because f2fs doesn't actually let users utilize any space saved by compression, see:
https://lore.kernel.org/linux-f2fs-deve … oogle.com/
In particular:
In this series, we've implemented transparent compression experimentally. It
supports LZO and LZ4, but will add more later as we investigate in the field
more. At this point, the feature doesn't expose compressed space to user
directly in order to guarantee potential data updates later to the space.
Instead, the main goal is to reduce data writes to flash disk as much as
possible, resulting in extending disk life time as well as relaxing IO
congestion. Alternatively, we're also considering to add ioctl() to reclaim
compressed space and show it to user after putting the immutable bit.
Edit: Apparently they added the planned ioctl() already, see the updated kernel doc here:
https://www.kernel.org/doc/html/latest/ … /f2fs.html
Last edited by Ketsui (2021-12-03 06:48:59)
Ah! Thank you. I had found a couple of forum comments from people who speculated that, but hadn't found an authoritative document. Disappointing that compressed space can only be freed by making the file immutable, but at least it's an option. Performance and reducing flash wear are two of the reasons I wanted to enable compression; I had just hoped to save space, too.
There's support in f2fs_io (the release_cblocks and reserve_cblocks subcommands). The following is run in a folder that has the +c attribute set.
$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/sdb7 357G 242G 115G 68% /
$ dd if=/dev/zero of=10GB bs=1M count=10240 status=progress
10712252416 bytes (11 GB, 10 GiB) copied, 25 s, 428 MB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 25.0641 s, 428 MB/s
$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/sdb7 357G 252G 105G 71% /
# free the unused reserved blocks
$ f2fs_io release_cblocks 10GB
1966047
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb7 357G 244G 113G 69% /
# as advertised, the file is immutable.
$ echo foo >>10GB
bash: echo: write error: Operation not permitted
# After reserving the space, the file is writable again, but it consumes the full 10GB again
$ f2fs_io reserve_cblocks 10GB
1966047
$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/sdb7 357G 252G 105G 71% /
$ echo foo >>10GB
$ tail 10GB
Nothing has worked for me. Apparently something still needs to be set up, but I don't know what.
F2FS_IOC_RELEASE_COMPRESS_BLOCKS failed: Operation not supported
F2FS_IOC_RELEASE_COMPRESS_BLOCKS failed: Operation not supported
Operation not supported when releasing compress blocks indicates the filesystem wasn't formatted with the compression flag (I get a different error when I try it on a file that isn't compressed). You're supposed to be able to use fsck.f2fs to enable flags on existing filesystems, but at least for compression I've always had to re-format to enable it. Review the compression section of the f2fs Arch wiki article.
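For reference, re-creating a volume with compression enabled looks roughly like this, using the same feature flags as in the first post. This is a sketch only: /dev/sdXN and /mnt/games are placeholders, and mkfs.f2fs destroys all existing data on the target device.

```shell
# WARNING: destroys all data on the target device (placeholder /dev/sdXN).
mkfs.f2fs -O extra_attr,compression,inode_checksum,sb_checksum,lost_found /dev/sdXN
mount -o compress_algorithm=lz4 /dev/sdXN /mnt/games
chattr -R +c /mnt/games   # mark the whole tree for compression
```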