Hi!
For some reason, btrfs-transaction writes a lot on my system. I don't know why, and any tips on why it does these writes, or on how to find out, would be welcome.
The writes add up to roughly 0.1 TB/week. Here is the output after running iotop -a for 30 minutes:
Total DISK READ : 0.00 B/s | Total DISK WRITE : 0.00 B/s
Actual DISK READ: 0.00 B/s | Actual DISK WRITE: 189.19 K/s
TID PRIO USER DISK READ DISK WRITE> SWAPIN IO COMMAND
372 be/4 root 1168.00 K 287.69 M ?unavailable? [btrfs-transaction]
3922 be/4 428679 2.15 M 79.91 M ?unavailable? java -Xms5G -Xmx10G -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:MaxGCPaus~gs.emc.gs -Daikars.new.flags=true -jar ./paper.jar nogui [RenderManager-0]
3923 be/4 428679 2.01 M 78.15 M ?unavailable? java -Xms5G -Xmx10G -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:MaxGCPaus~gs.emc.gs -Daikars.new.flags=true -jar ./paper.jar nogui [RenderManager-0]
933 be/4 root 1152.00 K 53.32 M ?unavailable? monitorix.pid
3087 be/4 428679 25.14 M 24.05 M ?unavailable? java -Xms5G -Xmx10G -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:MaxGCPaus~ags.emc.gs -Daikars.new.flags=true -jar ./paper.jar nogui [RegionFile I/O]
485 be/4 root 688.00 K 15.55 M ?unavailable? [kworker/u64:11-events_power_efficient]
4179 be/4 root 208.00 K 11.80 M ?unavailable? [kworker/u64:0-events_unbound]
119 be/4 root 384.00 K 11.72 M ?unavailable? [kworker/u64:2-flush-btrfs-1]
4540 be/4 root 512.00 K 11.36 M ?unavailable? [kworker/u64:7-btrfs-endio-write]
4387 be/4 root 48.00 K 10.45 M ?unavailable? [kworker/u64:4-btrfs-endio-write]
4502 be/4 root 0.00 B 10.38 M ?unavailable? [kworker/u64:6-btrfs-endio-write]
483 be/4 root 48.00 K 10.20 M ?unavailable? [kworker/u64:9-btrfs-endio-write]
5939 be/4 root 256.00 K 8.59 M ?unavailable? [kworker/u64:10-btrfs-endio-write]
5937 be/4 root 352.00 K 7.02 M ?unavailable? [kworker/u64:3-events_power_efficient]
3071 be/4 428679 180.00 K 5.59 M ?unavailable? java -Xms5G -Xmx10G -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:MaxGCPaus~lags.emc.gs -Daikars.new.flags=true -jar ./paper.jar nogui [Server thread]
3800 be/4 428679 1872.00 K 3.42 M ?unavailable? java -Xms5G -Xmx10G -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:MaxGCPaus~gs.emc.gs -Daikars.new.flags=true -jar ./paper.jar nogui [spark-async-sam]
423 be/4 root 0.00 B 2.76 M ?unavailable? systemd-journald
3910 be/4 428679 0.00 B 2.24 M ?unavailable? java -Xms5G -Xmx10G -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:MaxGCPaus~gs.emc.gs -Daikars.new.flags=true -jar ./paper.jar nogui [BlueMap-Plugin-]
3909 be/4 428679 0.00 B 1260.00 K ?unavailable? java -Xms5G -Xmx10G -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:MaxGCPaus~/mcflags.emc.gs -Daikars.new.flags=true -jar ./paper.jar nogui [Thread-15]
3771 be/4 428679 0.00 B 1008.00 K ?unavailable? java -Xms5G -Xmx10G -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:MaxGCPaus~gs.emc.gs -Daikars.new.flags=true -jar ./paper.jar nogui [DimensionDataIO]
6583 be/4 root 16.00 K 608.00 K ?unavailable? [kworker/u64:1-btrfs-endio-write]
6487 be/4 ville 0.00 B 516.00 K ?unavailable? -zsh
3778 be/4 428679 0.00 B 504.00 K ?unavailable? java -Xms5G -Xmx10G -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:MaxGCPaus~gs.emc.gs -Daikars.new.flags=true -jar ./paper.jar nogui [DimensionDataIO]
3780 be/4 428679 0.00 B 504.00 K ?unavailable? java -Xms5G -Xmx10G -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:MaxGCPaus~gs.emc.gs -Daikars.new.flags=true -jar ./paper.jar nogui [DimensionDataIO]
6342 be/4 ville 0.00 B 88.00 K ?unavailable? -zsh
746 be/4 ntp 0.00 B 84.00 K ?unavailable? ntpd -g -u ntp:ntp
5622 be/4 ville 0.00 B 72.00 K ?unavailable? -zsh
5904 be/4 428679 0.00 B 28.00 K ?unavailable? java -Xms5G -Xmx10G -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:MaxGCPaus~gs.emc.gs -Daikars.new.flags=true -jar ./paper.jar nogui [BlueMap-RegionF]
3022 be/4 428679 0.00 B 8.00 K ?unavailable? java -Xms5G -Xmx10G -XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:MaxGCPaus~gs.emc.gs -Daikars.new.flags=true -jar ./paper.jar nogui [Log4j2-AsyncApp]
967 be/4 minecraf 0.00 B 4.00 K ?unavailable? conmon --api-version 1 -c 7ffbdef99434e49996067392b1170b6ac76c59faa209642e~mmand-arg 7ffbdef99434e49996067392b1170b6ac76c59faa209642e4120bfa83c918b8d
2983 be/4 428679 0.00 B 4.00 K ?unavailable? cat
It seems btrfs-transaction's accumulated writes rise by approximately 6 MB every 30 seconds, which matches the commit interval. I consider this excessive.
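(For reference: the commit interval defaults to 30 seconds and can be tuned with the commit= mount option; a longer interval might at least batch the metadata updates into fewer, larger transactions. Untested on my side, but something like:
❯ sudo mount -o remount,commit=120 /
or the equivalent commit=120 added to the option list in /etc/fstab.)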
Currently, the computer is running a Minecraft server in a podman container and Monitorix, and not much else.
Writes done by btrfs-transaction are more than everything else combined (roughly eyeballing it). Monitorix writes quite a lot into its RRD databases (I could put these in tmpfs and sync them with anything-sync-daemon, I suppose; a sketch of that follows below).
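A minimal sketch of that idea, assuming Monitorix keeps its RRDs under /var/lib/monitorix (the path may differ per distro): list the directory in /etc/asd.conf and enable the daemon:
WHATTOSYNC=('/var/lib/monitorix')
❯ sudo systemctl enable --now asd.service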
My btrfs mounts are:
❯ mount | grep -i btrfs
/dev/nvme0n1p2 on / type btrfs (rw,relatime,lazytime,ssd,discard=async,space_cache=v2,subvolid=272,subvol=/@)
/dev/nvme0n1p2 on /home type btrfs (rw,noatime,lazytime,ssd,discard=async,space_cache=v2,subvolid=259,subvol=/@home)
/dev/nvme0n1p2 on /mnt/fatrace type btrfs (rw,relatime,lazytime,ssd,discard=async,space_cache=v2,subvolid=5,subvol=/)
I've mounted the "/" subvolume at /mnt/fatrace precisely to find out what is doing the writes; I get the same behavior without "/" mounted (fatrace breaks if the btrfs "/" subvolume, i.e. subvolid=5, is not mounted).
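In case it helps others, this is roughly how I watch for writes with it (-f W filters to write events, assuming a fatrace recent enough to have that option):
❯ sudo fatrace -f W -s 60 -o /tmp/writes.log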
Cheers!
>>relatime,lazytime,
I propose to start here: switch to noatime
Read: https://wiki.archlinux.org/title/Fstab#atime_options
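For example, your root line in /etc/fstab would become something like (only the option list changes; adapt if you mount by UUID= instead of the device node):
/dev/nvme0n1p2  /  btrfs  rw,noatime,lazytime,ssd,discard=async,space_cache=v2,subvolid=272,subvol=/@  0 0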
>>discard=async
I propose to switch to nodiscard, and use periodic TRIM instead.
Read: https://wiki.archlinux.org/title/Solid_ … iodic_TRIM
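Periodic TRIM is easiest with the fstrim timer shipped with util-linux (weekly by default):
❯ sudo systemctl enable --now fstrim.timer
❯ systemctl list-timers fstrim.timer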
Hi,
Thanks ua4000 for your reply.
Your suggestions have a negligible effect in my case; noatime was already enabled for /home, and root has negligible writes anyway (EDIT: I brainfarted, this is not really relevant here). I also believe discard should not affect the number of writes; however, I will disable it here (and report back if it has any effect).
I've read the section in the last link. iotop is not useful, as it just shows btrfs-transaction doing the writes. Is what I'm observing simply a feature of btrfs? If so, why, and what is it actually writing to the disk?
I also understand my SSD is probably fine. It's still a bit weird to have so many writes for no reason.
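To quantify the writes independently of iotop, I can sample the kernel's per-device counter (field 7 of /sys/block/<dev>/stat is sectors written, in 512-byte units):
❯ before=$(awk '{print $7}' /sys/block/nvme0n1/stat); sleep 60; after=$(awk '{print $7}' /sys/block/nvme0n1/stat); echo "$(( (after - before) * 512 / 1024 )) KiB written in 60 s"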
EDIT: Preliminary tests seem to indicate it was indeed the iops_limit problem, and setting nodiscard is a workaround. I'm not sure why; I haven't (yet) read the upstream bug report.
EDIT: Setting nodiscard and noatime for the mounts had no effect; btrfs-transaction is still thrashing away on the disk...
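For anyone following along: on kernels that expose them, the async-discard throttling knobs mentioned above live under sysfs and can be inspected with e.g.:
❯ grep . /sys/fs/btrfs/*/discard/iops_limit /sys/fs/btrfs/*/discard/kbps_limit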
Last edited by Wild Penguin (2024-10-17 20:43:28)