I can't delete an empty directory even with
rm -rf foo
# btrfs subvol del foo
?
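For what it's worth, a quick way to tell the two cases apart: btrfs subvolume roots always show inode number 256, so a sketch like this (the directory name and inode value are just examples) decides which delete command applies:

```shell
# If a directory is really a subvolume, rm/rmdir can't remove it and
# you need 'btrfs subvolume delete' instead. Subvolume roots always
# have inode number 256; on a live system you'd get it via: ls -di foo
inode=256   # example value; a plain directory would have some other inode
if [ "$inode" -eq 256 ]; then
    echo 'btrfs subvolume delete foo'   # it's a subvolume
else
    echo 'rm -rf foo'                   # it's a plain directory
fi
```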
Hi,
Two weeks ago I reinstalled my PC from scratch. Systemd all the way. The PC doubles as a desktop and media server, and everything worked as planned.
Yesterday I installed mkvtoolnix and used it to strip some audio tracks from an .mkv. After that, I used packer (an AUR helper) to install the Faenza and Faience icon sets. The Faenza icon set installed correctly, but the Faience icon set did not, because the downloaded package could not be extracted.
This could have two causes:
1. There is no space left on the device.
2. The device is mounted without write privileges.
I checked both:
1. with 'df -h' -> I have at least 2GB free on every partition (NOTE: my 'media' partition is listed as 100% used, but it has 6GB out of 996GB free...)
2. with 'mount -a -o rw' -> /etc/mtab shows that all are mounted rw
This didn't help: pacman still refused to install, complaining it could not write any file from the package.
I rebooted to make sure it wasn't a problem with a flooded temporary folder, and now gnome-session refuses to start. X starts, but GNOME/GDM gives me the sad-faced computer saying something has gone wrong. I can work in the console, and checking both hypotheses leads to the same conclusion: I have plenty of space left and the (relevant) partitions are all mounted 'rw'.
Help?
/EDIT: more info:
* My rootfs is btrfs and systemd warns me that there is no fsck.btrfs, but also that I can safely ignore that warning
* gnome-session complains about I/O errors when writing to /usr/share
* My SSD is 1.5 years old and I have never had problems before.
Hi, it's because the metadata in btrfs is taking up all of your 2GB.
Btrfs is claimed to be clever enough to allocate inodes dynamically, but that's actually not the case.
Try to run
btrfs filesystem balance /
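(A side note, in case you're on a newer btrfs-progs: the command is spelled `btrfs balance start` there, and usage filters can make the rebalance much cheaper by only rewriting mostly-empty chunks. The 5% threshold below is just an example, and this sketch only prints the commands it would run:)

```shell
# Dry-run sketch: print the balance invocations instead of running them.
# Drop the 'echo' to actually rebalance (needs root and a mounted btrfs).
for filter in dusage=5 musage=5; do
    echo btrfs balance start -"$filter" /
done
```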
Btrfs is far from stable. Just today I encountered another strange btrfs problem: I can't delete an empty directory even with
rm -rf foo
And to top it off, I mounted the partition with:
# mount -o compress=lzo /dev/sdc1 /mnt
So of course df may not report usage as accurately as it could.
Forget I said anything.
I was very curious about btrfs, but if not knowing whether there is free space on / is a known problem, then I guess it is not (yet) ready for prime time.
$ btrfs filesystem df /media/eceac11a-a960-4150-913b-3ab1025fb349/
Data: total=1.99GB, used=1.00GB
System, DUP: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=506.62MB, used=85.79MB
Metadata: total=8.00MB, used=0.00
What does it even mean?
It's explained on the btrfs wiki page I linked, but yeah, I don't think it's as clear as it should be.
The first value is the allocation, the second is the actual usage.
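If it helps, the two numbers are easy to pull apart mechanically; a sketch (assuming the line format of this btrfs-progs version stays exactly as shown above):

```shell
# Split one 'btrfs fi df' line on '=' and ',' to get allocation vs usage.
line='Data: total=1.99GB, used=1.00GB'
echo "$line" | awk -F'[=,]' '{print "allocated:", $2; print "used:", $4}'
```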
Here's mine for comparison:
wormzy@sakura[pts/1]~$ sudo btrfs fi show /dev/sda1
Label: 'Arch64-btrfs' uuid: 29034ff6-d2bf-469e-8c0e-82b701ee87d9
Total devices 2 FS bytes used 3.98GB
devid 1 size 20.00GB used 12.04GB path /dev/sda1
devid 2 size 20.00GB used 12.03GB path /dev/sdb1
wormzy@sakura[pts/1]~$ /bin/df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 40G 8.0G 31G 21% /
wormzy@sakura[pts/1]~$ btrfs fi df /
Data, RAID1: total=11.00GB, used=3.78GB
System, RAID1: total=32.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, RAID1: total=1.00GB, used=209.53MB
df says I'm using 8GB and have 31GB free. But since this is two 20GB partitions in RAID1, I don't actually have that much free space. btrfs reports that I've used 3.78GB of data, with 11GB allocated for it (for now), 4KB of system data (information about the btrfs filesystem itself, I assume), and 209.53MB of metadata (out of 1GB allocated). There's about 8GB of unallocated space left in reserve, so if I fill up my metadata allotment, it'll be increased to 2GB. Likewise, if I fill up my data allotment, that will be increased.
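Putting numbers on that (values taken from the `fi show` output above): with RAID1 every chunk lands on both devices, so the honest figure is the per-device unallocated space, not what df prints:

```shell
# unallocated per device = device size - allocated (20.00GB - 12.04GB)
echo '20.00 12.04' | awk '{printf "unallocated: %.2fGB per device\n", $1 - $2}'
```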
As the wiki states:
So, in general, it is impossible to give an accurate estimate of the amount of free space on any btrfs filesystem. Yes, this sucks.
But it should be possible to figure out where you stand from what I've just explained.
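The 'DUP' lines in the other output work the same way, by the way: DUP stores every block twice on a single device, so a metadata allocation listed as 506.62MB really occupies about twice that in raw disk space:

```shell
# Raw footprint of a DUP allocation = 2 x the listed total (506.62MB here)
echo 506.62 | awk '{printf "raw footprint: %.2fMB\n", $1 * 2}'
```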
$ btrfs filesystem df /media/eceac11a-a960-4150-913b-3ab1025fb349/
Data: total=1.99GB, used=1.00GB
System, DUP: total=8.00MB, used=4.00KB
System: total=4.00MB, used=0.00
Metadata, DUP: total=506.62MB, used=85.79MB
Metadata: total=8.00MB, used=0.00
What does it even mean? Dolphin sees it as 1.4 GiB, Nautilus as either 1.6 GB or 2.2 GB. Wut??
Oh yes, thanks for the correction
btrfs filesystem df /
Hmm btrfs, maybe you should include that in the topic title...
Done...