Hi,
hope this is the right place for problems with Pacman.
The space check (CheckSpace) is enabled in the configuration file, but pacman nevertheless lets me proceed even when not enough space is available (apparently every time).
It unfortunately blows up the system when the root partition runs out of memory: kernel files are gone, package files are lost (software cannot start anymore), etc. It basically happens every time there is not enough space for the whole update, and ever since, I have tried to check the available space manually and free some up (pacman cache and some snapshots) before doing the system update.
I don't know the cause. I did not find any debug log messages in `/var/log/pacman.log` or in `journalctl`. I could try turning on debug logging in the config, but maybe a pacman-savvy person already knows the solution to this problem?
Hint: I use btrfs for the root, and I experienced this issue after I moved the swap partition (originally it sat after my root partition) using the KDE Partition Manager. This process did not work out so well: after startup issues, I had to fix `/etc/fstab`, and because I originally wanted to enlarge the root, I also executed `btrfs filesystem resize max /` so that the resize I had previously applied in KDE Partition Manager actually took effect in btrfs.
Thanks in advance.
PS: here is some `df` output:

```
Filesystem      1K-blocks      Used Available Use% Mounted on
/dev/nvme0n1p7   50483200  26418392  21868584  55% /
devtmpfs             4096         0      4096   0% /dev
tmpfs            16299292     10316  16288976   1% /dev/shm
efivarfs              192        88       100  48% /sys/firmware/efi/efivars
tmpfs             6519720      2052   6517668   1% /run
/dev/nvme0n1p7   50483200  26418392  21868584  55% /root
/dev/nvme0n1p7   50483200  26418392  21868584  55% /srv
/dev/nvme0n1p7   50483200  26418392  21868584  55% /var/cache
/dev/nvme0n1p7   50483200  26418392  21868584  55% /var/tmp
/dev/nvme0n1p7   50483200  26418392  21868584  55% /var/log
/dev/nvme0n1p8  293888000 102326296 190230424  35% /home
tmpfs            16299292      5316  16293976   1% /tmp
/dev/nvme0n1p6     512000       580    511420   1% /boot/efi
tmpfs             3259856       176   3259680   1% /run/user/1000
```
Output of `parted -l`:

```
Model: Lexar SSD NM6A1 1TB (nvme)
Disk /dev/nvme0n1: 1024GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name                          Flags
 1      1049kB  106MB   105MB   fat32           EFI system partition          boot, esp, no_automount
 2      106MB   123MB   16,8MB                  Microsoft reserved partition  msftres, no_automount
 3      123MB   108GB   108GB   ntfs            Basic data partition          msftdata
 9      108GB   112GB   4194MB  linux-swap(v1)                                swap
 7      134GB   186GB   51,7GB  btrfs
 6      186GB   187GB   525MB   fat32                                         boot, esp
 8      187GB   488GB   301GB   btrfs
 4      1024GB  1024GB  629MB   ntfs            Basic data partition          hidden, diag, no_automount

Model: Unknown (unknown)
Disk /dev/zram0: 33,4GB
Sector size (logical/physical): 4096B/4096B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system     Flags
 1      0,00B  33,4GB  33,4GB  linux-swap(v1)
```
Last edited by Elmar (2023-12-02 19:02:36)
You're using 26GB and have 21GB free on the root partition - what makes you believe that this is a disk space issue?
Except if the information is bogus: https://wiki.archlinux.org/title/Btrfs# … free_space
Otherwise it smells more like a btrfs issue, or too-aggressive trimming, or APST, or Windows being hibernated.
https://wiki.archlinux.org/title/Solid_state_drive#TRIM
https://wiki.archlinux.org/title/Solid_ … leshooting
And the 3rd link below. Mandatory.
Disable it (it's NOT the BIOS setting!) and reboot Windows and Linux twice for voodoo reasons.
Hint: I use btrfs for the root, and I experienced this issue after I moved the swap partition (originally it sat after my root partition) using the KDE Partition Manager. This process did not work out so well: after startup issues, I had to fix `/etc/fstab`, and because I originally wanted to enlarge the root, I also executed `btrfs filesystem resize max /` so that the resize I had previously applied in KDE Partition Manager actually took effect in btrfs.
https://wiki.archlinux.org/title/Btrfs#btrfs_check (don't "--repair", but at least see whether btrfs wants to complain about anything)
How exactly did you "move" the swap? There's (now) swap before the root partition (or rather: a 22GB gap) - did you move the root partition?
Thank you for your answer.
Please don't mind the memory usage: I had to clean up (which roughly halves the usage) before I could repeat the failed system update. (My partition steadily fills over time with regular updates; I have now deployed a pre-transaction hook that runs paccache automatically.)
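For reference, such a hook is a small alpm-hooks(5)-style file. A minimal sketch - the filename and the `-k2` retention count are my own choices, and it assumes paccache from pacman-contrib is installed:

```
# /etc/pacman.d/hooks/paccache-clean.hook (hypothetical filename)
[Trigger]
Operation = Install
Operation = Upgrade
Operation = Remove
Type = Package
Target = *

[Action]
Description = Pruning old packages from the pacman cache...
When = PreTransaction
Exec = /usr/bin/paccache -rk2
```

`-r` removes old package files and `-k2` keeps the two most recent versions of each package; adjust to taste.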
The diagnosis is easy because it tells me there is not enough memory left. Then there are many errors while processing the packages, and the root partition is indeed completely full, not a byte left.
When I am not sure, I typically execute `btrfs filesystem usage /` before the update to check the remaining space.
Yes, my thought was that the space computation could somehow be wrong in my specific case, probably related to my resized btrfs. I moved the swap partition to before the root partition (leaving it between Windows and root) to gain extra space for the root partition.
I did not dare to move the root partition though.
I'd assume Windows is not hibernated because I have write access to the Windows partition from Dolphin (which did not work the one time it was hibernated in the past).
This is the output of `btrfs check /dev/nvme0n1p7`:

```
Checking filesystem on /dev/nvme0n1p7
UUID: 11d26dd1-3904-490b-9e30-1b11c6731dc0
[1/7] checking root items
[2/7] checking extents
[3/7] checking free space tree
[4/7] checking fs roots
[5/7] checking only csums items (without verifying data)
[6/7] checking root refs
[7/7] checking quota groups skipped (not enabled on this FS)
found 26425634816 bytes used, no error found
total csum bytes: 24721944
total tree bytes: 974159872
total fs tree bytes: 881901568
total extent tree bytes: 57901056
btree space waste bytes: 162661935
file data blocks allocated: 33795923968
 referenced 50298916864
```
I used `--force` to check the mounted partition. Do you mean I should boot from a live USB and unmount the root partition for the btrfs check?
This is the output of `btrfs filesystem usage /`:

```
Overall:
    Device size:                  48.14GiB
    Device allocated:             44.77GiB
    Device unallocated:            3.38GiB
    Device missing:                  0.00B
    Device slack:                    0.00B
    Used:                         25.52GiB
    Free (estimated):             20.88GiB      (min: 19.19GiB)
    Free (statfs, df):            20.88GiB
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:               87.14MiB      (used: 0.00B)
    Multiple profiles:                  no

Data,single: Size:41.20GiB, Used:23.70GiB (57.53%)
   /dev/nvme0n1p7  41.20GiB

Metadata,DUP: Size:1.75GiB, Used:929.00MiB (51.84%)
   /dev/nvme0n1p7   3.50GiB

System,DUP: Size:32.00MiB, Used:16.00KiB (0.05%)
   /dev/nvme0n1p7  64.00MiB

Unallocated:
   /dev/nvme0n1p7   3.38GiB
```
At least the individual figures sum up to the 48GiB of device size.
FS trimming is enabled: according to journalctl, it ran last night (only after I fixed the incident), right at the beginning of December. It did not run during the few days before, so I assume it trims weekly or monthly.
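For what it's worth, the weekly guess can be checked against the shipped timer unit; abridged (verify against your local `/usr/lib/systemd/system/fstrim.timer`, as versions differ):

```
[Timer]
OnCalendar=weekly
AccuracySec=1h
Persistent=true
```

`Persistent=true` means a missed run is caught up at the next boot, which would explain a trim happening "last night" rather than on a fixed weekday.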
Regarding a potential device failure due to APST: I have not noticed any problems other than system updates being able to run out of memory, so from a user perspective the NVMe never became unusable. I never experienced failures after suspending either.
Last edited by Elmar (2023-12-01 23:23:50)
I'd assume Windows is not hibernated because
=> Check.
But the bottom line is that you actually were out of disk space (just not reflected by the posted data), and the CheckSpace approximation miscalculated it?
Snapshots will prevent space from being re-used, see https://bugs.archlinux.org/task/65705 ?
Did I miss it, or have we not gotten an actual, exact error that pacman is throwing? Memory and storage tend to refer to different things; I don't know why you're looking at storage if the error is about memory.
Oh man, the link you gave me could be the reason. I am using Snapper snapshots (I had almost mentioned cleaning snapshots besides the pacman cache), and it makes one pre- and one post-snapshot per update.
I looked at the pacman code and saw the space computation myself, but I could not have known this:
"pacman estimates the disk space requirements under the assumption that deleting a file reclaims the space occupied by the file. That's perfectly sound, unless snapshots are taken. In a setup like the one used by snap-pac ["Pacman hooks that use snapper", snap-pac(8)], which creates pre- and post-upgrade snapshots on btrfs partitions, deletion of a file does *not* free any space; this only happens once the snapshot is deleted."
That's possibly why I can free up quite some "storage" by deleting snapshots. Yes, I mean storage, not working memory; my working memory is 32GB. (I am sorry: in German, "storage" and "memory" are the same word, Speicher, so I forgot the distinction.) Looking into `/var/log/pacman.log`, I see errors of this kind:
```
error: could not extract /usr/share/fonts/signika-negative-sc/SignikaNegativeSC-Bold.ttf (Write failed)
```
A message about insufficient space apparently is not logged; I only knew the underlying problem because it has happened so often in the past and because the root partition was full.
Regarding Windows: I remember turning off hibernation a while ago, and I always shut down completely, because otherwise I cannot write to the NTFS partition from Linux (I rarely boot Windows anyway).
Well, okay, this could mean I need to remove snapshots more often to prevent storage from filling up. Maybe I can find a way to check the condition with a shell command. I could also build a custom pacman version for myself, but I am not willing to do that.
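An untested sketch of such a check, assuming GNU df and a self-chosen 5 GiB threshold (adjust to taste; note that space held by snapper snapshots can make the real usable space lower than df reports):

```shell
#!/bin/sh
# Refuse to continue when / has less than a chosen amount free.
min_kib=$((5 * 1024 * 1024))                          # 5 GiB, in KiB
avail_kib=$(df --output=avail -k / | tail -n 1 | tr -d ' ')
if [ "$avail_kib" -lt "$min_kib" ]; then
    echo "only ${avail_kib} KiB free on /, clean snapshots/cache first" >&2
    exit 1
fi
echo "ok: ${avail_kib} KiB free on /"
```

Run it before `pacman -Syu`, or wire it into a PreTransaction hook with `AbortOnFail` so a failing check stops the transaction.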
Thank you for your help :-) .
Last edited by Elmar (2023-12-02 11:39:48)
Please always remember to mark resolved threads by editing your initial post's subject - so others will know that there's no task left, but maybe a solution to find.
Thanks.