UPDATE: the bug was fixed and is no longer an issue. Or at least I haven't encountered it again.
Hello everyone, hopefully someone can point me in the right direction on how to fix this and prevent it in the future. Basically, a BTRFS filesystem states that there is no space left. I'm currently running a full balance just to see if it helps, since that is suggested in the btrfs wiki, the Arch wiki and elsewhere; it helped temporarily, but after a couple of days the same issue cropped up again. How do I get btrfs to stop running out of space? Running a balance every time is not really an option, as it takes forever, and a partial balance doesn't seem to help; only a full balance of both data and metadata has helped so far, and only temporarily. I've never had this problem with btrfs before and I'm not sure what's causing it.
Installed a new 10TB drive (Seagate) - /dev/sda
Created 40 GB Swap - /dev/sda1
Created 10TB btrfs - /dev/sda2
Updated fstab (auto-mount /files on boot)
Turned on Swap
Mounted btrfs to /files/
Copied all files from the old drive (mounted to /home/user/tmp_drive/ via a USB SATA adapter)
Reboot
Used HandBrake to encode a movie; all of a sudden btrfs reported no more space.
Ran btrfs balance start /files/ (still in progress).
Note: no snapshots enabled. (A sketch of the equivalent setup commands follows below.)
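For reference, the steps above roughly correspond to the following shell commands. This is a reconstruction, not the exact commands used: device names and sizes come from the list above, everything else (sgdisk instead of a GUI partitioner, rsync for the copy) is assumed.
sgdisk -n 1:0:+40G -t 1:8200 /dev/sda   # 40 GB swap partition
sgdisk -n 2:0:0    -t 2:8300 /dev/sda   # rest of the disk for btrfs
mkswap /dev/sda1 && swapon /dev/sda1
mkfs.btrfs /dev/sda2
mkdir -p /files && mount /dev/sda2 /files
blkid /dev/sda1 /dev/sda2               # UUIDs for the fstab entries
rsync -a /home/user/tmp_drive/ /files/  # copy data from the old drive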
sudo btrfs fi usage /files
Overall:
Device size: 9.05TiB
Device allocated: 2.42TiB
Device unallocated: 6.63TiB
Device missing: 0.00B
Used: 2.40TiB
Free (estimated): 6.65TiB (min: 3.33TiB)
Data ratio: 1.00
Metadata ratio: 2.00
Global reserve: 512.00MiB (used: 0.00B)
Data,single: Size:2.41TiB, Used:2.40TiB (99.35%)
/dev/sda2 2.41TiB
Metadata,DUP: Size:3.00GiB, Used:2.75GiB (91.61%)
/dev/sda2 6.00GiB
System,DUP: Size:8.00MiB, Used:288.00KiB (3.52%)
/dev/sda2 16.00MiB
Unallocated:
/dev/sda2 6.63TiB
sudo btrfs fi show
Label: none uuid: 503e1c5b-5d41-4e6a-8ef5-75678c730bc7
Total devices 1 FS bytes used 2.40TiB
devid 1 size 9.05TiB used 2.42TiB path /dev/sda2
sudo btrfs fi df /files/
Data, single: total=2.41TiB, used=2.40TiB
System, DUP: total=8.00MiB, used=304.00KiB
Metadata, DUP: total=3.00GiB, used=2.74GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
sudo btrfs subvolume show /files/
/
Name: <FS_TREE>
UUID: c745d61b-ab32-4fcf-b0a5-7f0b522f3fd3
Parent UUID: -
Received UUID: -
Creation time: 2019-12-27 16:40:38 -0700
Subvolume ID: 5
Generation: 6621
Gen at creation: 0
Parent ID: 0
Top level ID: 0
Flags: -
Snapshot(s):
fstab
# /dev/nvme0n1p4
UUID=e1922aed-d337-47e6-a979-2f72f49e1a99 / ext4 rw,noatime 0 1
# /dev/nvme0n1p2
UUID=884926e4-f13d-40f0-9cea-c8868a62b2a7 /boot ext4 rw,noatime,data=ordered 0 2
# /dev/nvme0n1p1
UUID=2172-5FFB /boot/efi vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,utf8,errors=remount-ro 0 2
# /dev/nvme0n1p3
UUID=fc2a2e02-2d82-4214-b5ec-fa5e14d4f038 /home ext4 rw,noatime,data=ordered 0 2
# /dev/nvme1n1p9
UUID=cc7c9dcb-8bf3-4490-8c86-115c524f473e /var ext4 rw,noatime,data=ordered 0 2
# /dev/sdc2
UUID=503e1c5b-5d41-4e6a-8ef5-75678c730bc7 /files btrfs rw,nosuid,nodev,relatime,space_cache,subvolid=5,subvol=/ 0 0
# /dev/sda1
UUID=9ff1a32a-36bf-4dff-9663-c1dcaaafbc0f none swap defaults 0 0
sudo btrfs fi usage /files
Overall:
Device size: 9.05TiB
Device allocated: 2.56TiB
Device unallocated: 6.49TiB
Device missing: 0.00B
Used: 2.55TiB
Free (estimated): 6.49TiB (min: 3.25TiB)
Data ratio: 1.00
Metadata ratio: 2.00
Global reserve: 512.00MiB (used: 0.00B)
Data,single: Size:2.56TiB, Used:2.55TiB (99.65%)
/dev/sda2 2.56TiB
Metadata,DUP: Size:3.00GiB, Used:2.76GiB (92.08%)
/dev/sda2 6.00GiB
System,DUP: Size:32.00MiB, Used:304.00KiB (0.93%)
/dev/sda2 64.00MiB
Unallocated:
/dev/sda2 6.49TiB
sudo btrfs fi df /files/
Data, single: total=2.56TiB, used=2.55TiB
System, DUP: total=32.00MiB, used=304.00KiB
Metadata, DUP: total=3.00GiB, used=2.76GiB
GlobalReserve, single: total=512.00MiB, used=0.00B
sudo btrfs fi show
Label: none uuid: 503e1c5b-5d41-4e6a-8ef5-75678c730bc7
Total devices 1 FS bytes used 2.55TiB
devid 1 size 9.05TiB used 2.56TiB path /dev/sda2
df -h
Filesystem Size Used Avail Use% Mounted on
dev 16G 0 16G 0% /dev
run 16G 2.1M 16G 1% /run
/dev/nvme1n1p4 138G 12G 120G 9% /
tmpfs 16G 53M 16G 1% /dev/shm
tmpfs 16G 0 16G 0% /sys/fs/cgroup
tmpfs 16G 143M 16G 1% /tmp
/dev/nvme1n1p2 2.0G 85M 1.8G 5% /boot
/dev/nvme1n1p9 209G 15G 184G 8% /var
/dev/nvme0n1p1 865G 153G 668G 19% /home
/dev/nvme1n1p1 286M 121M 165M 43% /boot/efi
/dev/sda2 9.1T 2.6T 0 100% /files
tmpfs 3.2G 18M 3.2G 1% /run/user/1000
/dev/sr0 91G 91G 0 100% /run/media/yevsey/APOCALYPSE_NOW_FINAL_CUT
Last edited by krutoileshii (2022-10-19 01:08:11)
Offline
It seems like a bug in BTRFS. AFAIK, there are patches, and this issue is likely to be fixed in upcoming kernel releases.
Offline
Definitely seems like it. Do you by chance know which kernel version has the patches? I was going to compile it and give it a shot.
It seems like a bug in BTRFS. AFAIK, there are patches, and this issue is likely to be fixed in upcoming kernel releases.
Offline
You could try linux 5.5-rc4 available using the linux-mainline package from AUR and Unofficial_user_repositories#miffe.
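For anyone unfamiliar with building it, a minimal sketch of the AUR route (assumes git and base-devel are installed; whether you get 5.5-rc4 depends on the PKGBUILD at the time):
git clone https://aur.archlinux.org/linux-mainline.git
cd linux-mainline
makepkg -si   # build and install the mainline kernel package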
Offline
5.5-rc4 doesn't have the patch just yet; it doesn't look like it's been accepted upstream. I'm going to give it a week or so, and if it's still not in, I'll just patch it manually and test. Since it only really causes issues in user space and not with the actual data, I can wait, since everything is backed up anyway.
You could try linux 5.5-rc4 available using the linux-mainline package from AUR and Unofficial_user_repositories#miffe.
Offline
Did you partition it with fdisk or gdisk?
/dev/sdf1 3.7T 1.9T 1.8T 53% /media/4x2
/dev/sda1 2.8T 1.2T 1.6T 42% /media/backup2
/dev/sdd1 3.7T 1.9T 1.8T 53% /media/tv
/dev/sdb1 2.8T 1.1T 1.7T 41% /home
/dev/sde1 3.7T 1.8T 2.0T 48% /media/4x1
I don't have any 10TB drives, but several larger than 2.4TB. Those are all btrfs.
Not sure if it's relevant, just a guess.
Last edited by ronmon (2020-01-02 00:11:53)
Offline
Partitioned with gparted, actually. But that's not the issue, as I have another PC with the same kernel version, partitioned with gdisk, exhibiting the same issue. Currently in the middle of compiling mainline with the above patch included manually to test it out. With all of the modules it compiles, even on 16 cores it's taking a bit of time. Once I get a chance to test it, I will report back, which might be a few days, as I have to write quite a bit to disk before the problem manifests itself.
The btrfs utilities do show the correct size and available space; it's df that gets incorrect free space, which causes issues until the metadata is rebalanced again.
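A quick way to see the mismatch side by side (stat -f prints the raw statfs() fields that df reads; the mount point is the one from this thread):
df -h /files                # reports 0 available when the bug hits
sudo btrfs fi usage /files  # still shows TiBs of unallocated space
stat -f /files              # the raw statfs() numbers df is based on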
What kernel version are you on, btw? I'm on 5.4.6, which is where I started to see the issues. This wasn't there before.
sudo gdisk -l /dev/sda
GPT fdisk (gdisk) version 1.0.4
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
Found valid GPT with protective MBR; using GPT.
Disk /dev/sda: 19532873728 sectors, 9.1 TiB
Model: ST10000VN0004-1Z
Sector size (logical/physical): 512/4096 bytes
Disk identifier (GUID): CBA08A5C-9491-42D2-9FB6-9884D52DE9A0
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 19532873694
Partitions will be aligned on 2048-sector boundaries
Total free space is 4029 sectors (2.0 MiB)
Number Start (sector) End (sector) Size Code Name
1 2048 97656831 46.6 GiB 8200
2 97656832 19532871679 9.0 TiB 8300
Did you partition it with fdisk or gdisk?
Last edited by krutoileshii (2020-01-02 00:15:51)
Offline
Use:
btrfs balance start -v -dusage=85 /files
If you let it default to 100%, it will run forever.
If it fails with 85, try something like 40, then 60, then 85.
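One way to script that stepping, as a sketch (the percentages are just examples):
# Walk the data-usage filter upward; each pass only rewrites chunks
# that are at most that percentage full, so the early passes are cheap.
for u in 40 60 85; do
    sudo btrfs balance start -v -dusage=$u /files || break
done
# From another terminal, check progress with:
sudo btrfs balance status /files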
Offline
I've used that, but unfortunately nothing below 95 currently makes a difference, which with 2.6TB of data takes a while. I finished compiling the kernel with the patch though, so I'll give it a shot once this round of balancing finishes.
Use:
btrfs balance start -v -dusage=85 /files
If you let it default to 100%, it will run forever.
If it fails with 85, try something like 40, then 60, then 85.
Offline
Also, it looks like similar issues started around 5.4.1: mailing list discussion
P.S. Still running the balance. Since metadata seems to be the issue, I might try balancing just that; it should be faster (sketch below).
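A metadata-only balance would look like this sketch (the 50% threshold is an arbitrary example; note the later reply in this thread advising against balancing metadata at all):
sudo btrfs balance start -v -musage=50 /files  # only rewrite metadata chunks at most 50% full
sudo btrfs balance status /files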
Offline
Also, it looks like similar issues started around 5.4.1: mailing list discussion
If it started around 5.4.1, perhaps the issue was introduced with 5.4.
Offline
Quick update: booted into the patched mainline 5.5-rc4. Wrote over 120GB with no issues; going to write another 120GB to see if it pops up. The patch seems promising; hopefully it solves the issue and gets backported.
Last edited by krutoileshii (2020-01-03 03:36:25)
Offline
You can revert to kernel 5.3.18 in the meantime as a workaround. The bug is actually more than one bug going back a while, but it is exposed in 5.4 due to a new overcommit behavior for metadata. There are also patches you could apply if you prefer to stay on the 5.4 series, but these patches may not be the final ones that will be accepted for merge after testing.
https://patchwork.kernel.org/project/li … ies=208445
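Applying a series like that to a 5.4 tree before building might look like this sketch (the patch filename is a placeholder; the real series is at the patchwork link above):
cd linux-5.4.8                                  # your kernel source tree
patch -p1 < ../btrfs-metadata-overcommit.patch  # placeholder name for the downloaded series
make olddefconfig
make -j16 && sudo make modules_install install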
It's advised not to balance metadata; it can make the problem happen more often, and it really shouldn't be necessary anyway. Explanation in this upstream post:
https://lore.kernel.org/linux-btrfs/CAN … ad03ca88f6
Offline
The metadata part was an interesting read. As for rolling back, I'll just use patched mainline until the fix makes it into the kernel. The fact that I wrote over 2.6TB in under a week likely increased my chances of hitting the issue.
Offline
Looks like my thread got redirected to this one - thanks all. I downgraded the kernel to 5.3.13.1 and ran a btrfs balance on the volume; all good so far. Will keep an eye out for when the patches hit a stable release. Cheers,
Offline
I've now run into this issue twice after reinstalling 2 systems... But I fixed it by running "btrfs filesystem resize 1:max /home" and then running a btrfs scrub once. I'm not sure if it was necessary to delete some files, but I tried that as well, so I don't know what finally fixed it for me. (Commands sketched below.)
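The sequence described, as a sketch (devid 1 and the /home mount point are taken from the post above; scrub runs in the background, so status is checked separately):
sudo btrfs filesystem resize 1:max /home   # grow devid 1 to fill its partition
sudo btrfs scrub start /home               # verify all checksums in the background
sudo btrfs scrub status /home              # check progress and results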
Offline
Quick update: currently sitting on 5.4.10 and haven't experienced any issues just yet. So far I've written about 400GB of data, but I need to keep an eye on it for a bit longer. Hopefully the patches make it in and get backported.
Offline