Hello everyone,
Today I encountered an issue with my Arch Linux system running on Btrfs. While working (specifically during an exam), the system suddenly remounted the filesystem read-only due to a corruption error. However, after running diagnostics, I couldn't find any actual issues with either the filesystem or the disk.
System Information:
Distro: Arch Linux (fully updated)
Kernel: 6.13.1-zen
Filesystem: Btrfs (on a single NVMe SSD, no RAID)
What Happened:
The system became read-only without any apparent cause.
Checking dmesg I found the following log:
[Fri Feb 7 19:02:58 2025] BTRFS info (device nvme0n1p3): bdev /dev/nvme0n1p3 errs: wr 0, rd 0, flush 0, corrupt 1, gen 0
Running
btrfs device stats /
also reported:
bdev /dev/nvme0n1p3 errs: wr 0, rd 0, flush 0, corrupt 1, gen 0
However, a full
btrfs scrub
completed successfully with no error reported.
Running
btrfs check --readonly --force /dev/nvme0n1p3
found no corruption.
smartctl
also reports no issues with the drive.
It also seems that the filesystem tree was regenerated at the next boot.
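For anyone wanting to reproduce the checks, they were roughly along these lines (device paths taken from the log above; smartctl wants the controller device, not the partition):
btrfs device stats /
btrfs scrub start -B /
btrfs scrub status /
smartctl -a /dev/nvme0n1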
Possible Theories:
A transient issue with the Btrfs log tree that caused a false positive corruption report.
A kernel bug affecting Btrfs.
An inconsistency that was resolved by the log replay at mount time.
A temporary problem with the SSD firmware or cache that did not persist.
Questions:
Has anyone else experienced similar behavior?
Could this be a known issue with the kernel / Btrfs?
Are there any additional checks I should run to further diagnose the cause?
I’d appreciate any insights! Thanks in advance.
Last edited by fofe1123 (2025-02-07 20:49:14)
Offline
Could this be caused by trimming (the discard mount option) or APST?
Are there any errors pertaining to the NVMe in the journal?
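For example, something like this (assuming nvme-cli is installed and the controller is /dev/nvme0; use -b -1 instead of -b if the machine has been rebooted since the incident):
journalctl -kb | grep -iE 'nvme|i/o error'
nvme get-feature -f 0x0c -H /dev/nvme0
To rule out APST, you can also boot once with the kernel parameter nvme_core.default_ps_max_latency_us=0.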
Offline
I've had this problem for a long time (over a year). Over time I've tried many different candidate solutions; so far nothing has worked. What makes this very hard for me to debug is the fact that my system is btrfs-only. So, when it happens: no logs...
Edit: I often see this upon resume.
Last edited by okram (2025-02-18 15:42:46)
Offline
It seems like the stats never get cleared unless you run
btrfs device stats --reset /mnt/your-array
I got the information from here https://unix.stackexchange.com/question … trfs-devic
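A quick usage example (using / as in the original post; --reset zeroes the counters, so a recurrence after the next scrub or incident becomes obvious):
btrfs device stats /
btrfs device stats --reset /
btrfs device stats /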
Edit: According to a lot of people who claim to know about this issue, the system suddenly remounting read-only under Btrfs is a sign that the disk hardware is failing.
Last edited by Mahtan (2025-08-18 17:53:34)
Offline
I have a similar issue with my faulty Dell Inspiron laptop, where the drive sometimes disconnects, the filesystem becomes read-only, and some commands show "input/output error". I have never been able to run the scrub command in that scenario because of the error above, but running a scrub afterwards reports no errors either.
Offline
In Btrfs, some extents can become unreachable through normal operation (e.g. bookend extents). If corruption is detected in those, scrub won't show it, because scrub only scans reachable data. The corruption is real nonetheless (it just hasn't affected reachable data yet) and points to either a disk or a memory problem.
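If you want to rule out the memory side without rebooting into memtest86+, a userspace pass with memtester is one option (size here is just an example; leave enough RAM free for the rest of the system and run it as root so it can lock the pages):
memtester 4096M 2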
Last edited by topcat01 (2025-09-14 18:32:29)
Offline
I overheated my ASUS PC in my backpack due to a suspend failure (which was probably caused by the Btrfs read-only issue itself; funny, because I originally thought it was the other way around).
journalctl --list-boots https://gist.githubusercontent.com/iamb … tfile1.txt
the first boot with the error:
https://gist.githubusercontent.com/iamb … tfile1.txt
https://gist.githubusercontent.com/iamb … tfile1.txt
now it shows
"EFI stub: WARNING: Failed to measure data for event 1: 0x800000000000000b" after plymouth and before dmcrypt on every boot
- the message appears out of place and doesn't show up in
/var/boot/log https://gist.githubusercontent.com/iamb … tfile1.txt
then the Btrfs read-only issue started to appear
along with a major slowdown
scrub reports no errors
btrfs check: https://gist.githubusercontent.com/iamb … tfile1.txt
running it a second time produced a 200 MB error file
it crashed again and I got a giant QR code: https://gist.githubusercontent.com/iamb … tfile1.txt
the boot partition disappeared once
I installed another new, identical SSD and booted from it, but
"EFI stub: WARNING: Failed to measure data for event 1: 0x800000000000000b" still appears
booting linux-lts always shows the "ASUS" logo at the point where the warning normally appears when booting linux
Last edited by c>rust (Yesterday 07:36:04)
Offline
Why is that full of escape sequences?
Please post your complete system journal for the boot:
sudo journalctl -b | curl -F 'file=@-' 0x0.st
sudo btrfs check --force /dev/mapper/root
Opening filesystem to check...
WARNING: filesystem mounted, continuing because of --force
Have you somehow lost your mind? Don't operate on an open FS, let alone run a non-readonly btrfs check on it.
Offline
Sorry, I was editing when my ASUS crashed (again). I'm currently on my Dell; some of them are not final.
Last edited by c>rust (2025-10-15 09:20:56)
Offline
Ok, but whatever you do: do not run btrfs check on an open filesystem. Certainly not in writing mode. You risk making things much, MUCH worse.
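If you want a full check, the safer route is from a live ISO with the filesystem not mounted, e.g. (read-only, assuming a LUKS container as the /dev/mapper/root path suggests; the partition name here is just an example):
cryptsetup open /dev/nvme0n1p3 root
btrfs check --readonly /dev/mapper/root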
Offline