Hello everyone, I'm trying to track down and fix an issue that is making my system laggy.
I've noticed that running docker build on a specific project of mine reliably causes the unresponsiveness.
Another way for me to reproduce the freeze is to use plasma-pass and type quickly in its search bar; after a while my whole DE becomes unresponsive.
I've tested whether docker build reproduces the issue when booted into multi-user.target, and it does: editors like vim and helix struggled to open, taking a couple of seconds instead of the usual instantaneous start.
For me this was enough to conclude that it was not necessarily DE-related and so I moved on to benchmarking.
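Before benchmarking, one quick way to confirm the stalls are actually I/O-bound (a sketch, assuming a kernel built with pressure-stall support, CONFIG_PSI=y, mainline since 4.20) is to watch the I/O pressure counters while reproducing the freeze:

```shell
# Watch I/O pressure stall information once per second while reproducing
# the freeze. A rapidly rising "full" line means all non-idle tasks are
# blocked on I/O, which matches a system-wide freeze rather than a DE bug.
watch -n1 cat /proc/pressure/io
```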
Below are some results reported by fio:
❮ sudo fio --filename=~/test.fio --name=test-randwrite \
--size=4G --rw=randwrite --bs=4k \
--ioengine=io_uring --iodepth=256 --numjobs=4 \
--time_based --runtime=60 --direct=1 --group_reporting --unlink=1
test-randwrite: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=io_uring, iodepth=256
...
fio-3.41
Starting 4 processes
test-randwrite: Laying out IO file (1 file / 4096MiB)
Jobs: 1 (f=1): [_(1),f(1),_(2)][100.0%][w=6880KiB/s][w=1720 IOPS][eta 00m:00s]
test-randwrite: (groupid=0, jobs=4): err= 0: pid=15568: Fri Nov 7 11:24:58 2025
write: IOPS=11.6k, BW=45.2MiB/s (47.4MB/s)(2726MiB/60314msec); 0 zone resets
slat (nsec): min=491, max=186690, avg=2018.15, stdev=2158.93
clat (msec): min=8, max=25307, avg=88.48, stdev=969.37
lat (msec): min=8, max=25307, avg=88.49, stdev=969.37
clat percentiles (msec):
| 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 10],
| 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12],
| 70.00th=[ 13], 80.00th=[ 18], 90.00th=[ 268], 95.00th=[ 380],
| 99.00th=[ 506], 99.50th=[ 542], 99.90th=[17113], 99.95th=[17113],
| 99.99th=[17113]
bw ( KiB/s): min= 280, max=384992, per=100.00%, avg=76143.16, stdev=31865.53, samples=293
iops : min= 70, max=96248, avg=19035.78, stdev=7966.38, samples=293
lat (msec) : 10=30.48%, 20=52.31%, 50=6.21%, 100=0.09%, 250=0.29%
lat (msec) : 500=9.55%, 750=0.85%, 1000=0.07%, >=2000=0.15%
cpu : usr=0.49%, sys=1.14%, ctx=698002, majf=0, minf=33
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwts: total=0,697942,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=256
Run status group 0 (all jobs):
WRITE: bw=45.2MiB/s (47.4MB/s), 45.2MiB/s-45.2MiB/s (47.4MB/s-47.4MB/s), io=2726MiB (2859MB), run=60314-60314msec
❮ sudo fio --filename=~/test.fio --name=test-randread \
--size=4G --rw=randread --bs=4k \
--ioengine=io_uring --iodepth=256 --numjobs=4 \
--time_based --runtime=60 --direct=1 --group_reporting --unlink=1
test-randread: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=io_uring, iodepth=256
...
fio-3.41
Starting 4 processes
test-randread: Laying out IO file (1 file / 4096MiB)
Jobs: 4 (f=4): [r(4)][100.0%][r=1629MiB/s][r=417k IOPS][eta 00m:00s]
test-randread: (groupid=0, jobs=4): err= 0: pid=16990: Fri Nov 7 11:29:57 2025
read: IOPS=302k, BW=1178MiB/s (1235MB/s)(69.0GiB/60003msec)
slat (nsec): min=773, max=5174.5k, avg=6608.80, stdev=9158.92
clat (usec): min=101, max=255249, avg=3387.86, stdev=3418.58
lat (usec): min=120, max=255253, avg=3394.47, stdev=3418.86
clat percentiles (usec):
| 1.00th=[ 898], 5.00th=[ 1483], 10.00th=[ 1795], 20.00th=[ 2073],
| 30.00th=[ 2278], 40.00th=[ 2474], 50.00th=[ 2704], 60.00th=[ 3064],
| 70.00th=[ 3589], 80.00th=[ 4490], 90.00th=[ 5932], 95.00th=[ 6849],
| 99.00th=[ 9503], 99.50th=[ 10552], 99.90th=[ 15008], 99.95th=[ 62129],
| 99.99th=[177210]
bw ( MiB/s): min= 554, max= 1696, per=99.71%, avg=1174.82, stdev=102.13, samples=476
iops : min=141956, max=434376, avg=300753.08, stdev=26144.96, samples=476
lat (usec) : 250=0.01%, 500=0.11%, 750=0.43%, 1000=0.87%
lat (msec) : 2=15.27%, 4=58.67%, 10=23.93%, 20=0.64%, 50=0.01%
lat (msec) : 100=0.05%, 250=0.02%, 500=0.01%
cpu : usr=5.41%, sys=44.97%, ctx=4131514, majf=0, minf=38
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwts: total=18097736,0,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=256
Run status group 0 (all jobs):
READ: bw=1178MiB/s (1235MB/s), 1178MiB/s-1178MiB/s (1235MB/s-1235MB/s), io=69.0GiB (74.1GB), run=60003-60003msec
❮ sudo fio --filename=~/test.fio --name=test-randrw \
--size=4G --rw=randrw --bs=4k \
--ioengine=io_uring --iodepth=256 --numjobs=4 \
--time_based --runtime=60 --direct=1 --group_reporting --unlink=1
test-randrw: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=io_uring, iodepth=256
...
fio-3.41
Starting 4 processes
test-randrw: Laying out IO file (1 file / 4096MiB)
Jobs: 4 (f=4): [m(4)][100.0%][r=2528KiB/s,w=2516KiB/s][r=632,w=629 IOPS][eta 00m:00s]
test-randrw: (groupid=0, jobs=4): err= 0: pid=19772: Fri Nov 7 11:37:37 2025
read: IOPS=847, BW=3391KiB/s (3472kB/s)(200MiB/60447msec)
slat (usec): min=2, max=705, avg=61.06, stdev=62.17
clat (usec): min=32, max=692564, avg=4314.00, stdev=17146.29
lat (usec): min=126, max=692596, avg=4375.06, stdev=17142.82
clat percentiles (usec):
| 1.00th=[ 190], 5.00th=[ 245], 10.00th=[ 281], 20.00th=[ 318],
| 30.00th=[ 367], 40.00th=[ 392], 50.00th=[ 429], 60.00th=[ 498],
| 70.00th=[ 603], 80.00th=[ 775], 90.00th=[ 1012], 95.00th=[ 56361],
| 99.00th=[ 91751], 99.50th=[ 94897], 99.90th=[115868], 99.95th=[168821],
| 99.99th=[463471]
bw ( KiB/s): min= 80, max=49600, per=100.00%, avg=3760.37, stdev=1130.84, samples=436
iops : min= 20, max=12400, avg=940.09, stdev=282.71, samples=436
write: IOPS=854, BW=3420KiB/s (3502kB/s)(202MiB/60447msec); 0 zone resets
slat (nsec): min=455, max=223943, avg=6752.31, stdev=5758.01
clat (msec): min=14, max=8547, avg=1192.86, stdev=1092.50
lat (msec): min=14, max=8547, avg=1192.86, stdev=1092.50
clat percentiles (msec):
| 1.00th=[ 58], 5.00th=[ 88], 10.00th=[ 96], 20.00th=[ 995],
| 30.00th=[ 1003], 40.00th=[ 1045], 50.00th=[ 1062], 60.00th=[ 1083],
| 70.00th=[ 1250], 80.00th=[ 1334], 90.00th=[ 1620], 95.00th=[ 1687],
| 99.00th=[ 8288], 99.50th=[ 8423], 99.90th=[ 8490], 99.95th=[ 8490],
| 99.99th=[ 8557]
bw ( KiB/s): min= 32, max=43792, per=100.00%, avg=3718.09, stdev=989.55, samples=436
iops : min= 8, max=10948, avg=929.52, stdev=247.39, samples=436
lat (usec) : 50=0.01%, 100=0.01%, 250=2.75%, 500=27.33%, 750=9.10%
lat (usec) : 1000=5.49%
lat (msec) : 2=1.83%, 4=0.09%, 10=0.51%, 20=0.04%, 50=0.35%
lat (msec) : 100=7.62%, 250=0.31%, 500=0.27%, 750=0.45%, 1000=8.61%
lat (msec) : 2000=34.22%, >=2000=1.03%
cpu : usr=0.27%, sys=1.06%, ctx=100599, majf=0, minf=42
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
issued rwts: total=51238,51676,0,0 short=0,0,0,0 dropped=0,0,0,0
latency : target=0, window=0, percentile=100.00%, depth=256
Run status group 0 (all jobs):
READ: bw=3391KiB/s (3472kB/s), 3391KiB/s-3391KiB/s (3472kB/s-3472kB/s), io=200MiB (210MB), run=60447-60447msec
WRITE: bw=3420KiB/s (3502kB/s), 3420KiB/s-3420KiB/s (3502kB/s-3502kB/s), io=202MiB (212MB), run=60447-60447msec

My model, as reported by nvme, is Micron MTFDKCD512QFM-1BD1AABLA and I found some third-party benchmarks here. Those benchmarks put randread at about 2107 MB/s and randwrite at 1606 MB/s, but my local results are far poorer, especially randwrite (only 45 MB/s). The randrw results are also really bad; I've run randrw multiple times and the numbers are abysmally low across all runs.
I'm confident the freezes are related to this NVMe drive, because even starting these benchmarks makes my system unresponsive until fio begins displaying stats. So I guess the initialization fio does would be worth examining, since it causes a freeze (any suggestions on how to do that?).
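One possible way to watch what happens during fio's layout phase (an assumption on my part: the sysstat package is installed, and the dm-* names are whatever lsblk shows for your crypt/LVM layers):

```shell
# In a second terminal, print extended per-device statistics every second
# while fio lays out the file. A high w_await combined with modest wMB/s
# and a saturated %util points at the device or the dm-crypt/LVM stack
# rather than at the submitting process.
iostat -x -d 1 nvme0n1 dm-0 dm-1
```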
These are my block devices:
~
➜ sudo lsblk -f -o NAME,FSTYPE,FSVER,TYPE,FSAVAIL,FSUSE%
NAME FSTYPE FSVER TYPE FSAVAIL FSUSE%
nvme0n1 disk
├─nvme0n1p1 vfat FAT32 part 626,3M 39%
└─nvme0n1p2 crypto_LUKS 2 part
└─system_crypt LVM2_member LVM2 001 crypt
├─vg_system-swap swap 1 lvm
└─vg_system-root btrfs lvm 150,7G 66%
nvme1n1 crypto_LUKS 2 disk
└─storage_crypt LVM2_member LVM2 001 crypt
├─vg_storage-lfs btrfs lvm
  └─vg_storage-storage1 btrfs lvm 168,6G 75%

nvme0n1 is the drive I benchmarked.
Do you think this is a configuration issue or hardware malfunction?
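One check that can help decide between the two (assuming nvme-cli is installed) is the drive's own health log:

```shell
# A temperature near the throttle limit, a high percentage_used, or any
# non-zero critical_warning / media_errors entries would point at the
# hardware rather than the LUKS/LVM configuration.
sudo nvme smart-log /dev/nvme0n1
```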
Last edited by susd (2025-11-09 19:27:25)
nvme0n1 is the drive I benchmarked
You mean "LUKS => LVM => btrfs"
So I guess the initialization that fio is doing would be worth examining since it is causing a freeze
dd if=/dev/urandom of=~/test.fio bs=4k count=1M

Does that freeze anything?
It's gonna be the IO itself, and it would be great to test without LUKS/LVM, but there's only the 1GB boot partition - and you'd rather not do any low-level stuff on a LUKS device.
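One way to get below the LUKS/LVM layers without risking any data (a sketch; fio's --readonly flag makes it refuse to issue writes, so the LUKS header and contents stay untouched):

```shell
# Random reads against the raw NVMe device, bypassing dm-crypt and LVM.
# --readonly turns any attempted write into a hard failure, so this is
# safe to run against a device holding a LUKS container.
sudo fio --name=raw-randread --filename=/dev/nvme0n1 \
    --readonly --rw=randread --bs=4k --direct=1 \
    --ioengine=io_uring --iodepth=64 --numjobs=1 \
    --time_based --runtime=30 --group_reporting
```

Comparing this against the same read test on the filesystem would show how much the crypt/LVM stack contributes.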
Sorry for the late reply.
You mean "LUKS => LVM => btrfs"
Yes
Does that freeze anything?
Yes, it makes the KDE system tray unresponsive.
Results:
sudo dd if=/dev/urandom of=~/test.fio bs=4k count=1M
1048576+0 records in
1048576+0 records out
4294967296 bytes (4,3 GB, 4,0 GiB) copied, 9,55292 s, 450 MB/s
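Worth noting: that dd run writes through the page cache, and /dev/urandom itself can cap out at a few hundred MB/s, so the 450 MB/s figure says little about the drive. A variant that separates the two effects (a sketch; the /tmp path is just an example):

```shell
# Pre-generate the random data once, so urandom speed doesn't dominate,
# then time an uncached write of it with direct I/O.
sudo dd if=/dev/urandom of=/tmp/rand.bin bs=1M count=1024
sudo dd if=/tmp/rand.bin of=~/test.fio bs=1M oflag=direct status=progress
```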
It's gonna be the IO itself and it would be great to test w/o LUKS/LVM but there's only the 1GB boot partition - and you rather don't want to do any low-level stuff on a LUKS device
Is cryptsetup benchmark unsuspicious?
You might benefit from https://wiki.archlinux.org/title/Dm-cry … ives_(SSD) but MAKE SURE TO READ THE ARTICLE AND LINKED REFERENCES.
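For reference, the workqueue flags that article discusses can be tried non-persistently first (a sketch, assuming cryptsetup >= 2.3 and the mapping name from the lsblk output above; read the article's caveats before persisting anything):

```shell
# Re-activate the existing mapping with the performance flags; this does
# not survive a reboot, so it is safe to experiment with.
sudo cryptsetup refresh system_crypt \
    --perf-no_read_workqueue --perf-no_write_workqueue

# Only once happy with the result, write the flags into the LUKS2 header
# so they apply on every activation:
#   sudo cryptsetup refresh system_crypt --persistent \
#       --perf-no_read_workqueue --perf-no_write_workqueue
```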
I'll mark this as solved because I'm confident it is a hardware issue.
I tested both my NVMe drives under the same circumstances: I live-booted Arch Linux from a USB stick, created new LVM volumes (ext4) on each NVMe drive, and benchmarked them with fio.
Below I posted these results; the test used was randwrite on a 4G file.

[attached screenshots of the fio results]
cryptsetup benchmark looks fine
# Tests are approximate using memory only (no storage IO).
PBKDF2-sha1 3847985 iterations per second for 256-bit key
PBKDF2-sha256 7825194 iterations per second for 256-bit key
PBKDF2-sha512 2304562 iterations per second for 256-bit key
PBKDF2-ripemd160 1134822 iterations per second for 256-bit key
PBKDF2-whirlpool 953250 iterations per second for 256-bit key
argon2i 7 iterations, 1048576 memory, 4 parallel threads (CPUs) for 256-bit key (requested 2000 ms time)
argon2id 7 iterations, 1048576 memory, 4 parallel threads (CPUs) for 256-bit key (requested 2000 ms time)
# Algorithm | Key | Encryption | Decryption
aes-cbc 128b 1502,9 MiB/s 6092,3 MiB/s
serpent-cbc 128b 103,7 MiB/s 744,6 MiB/s
twofish-cbc 128b 224,5 MiB/s 492,9 MiB/s
aes-cbc 256b 1145,5 MiB/s 5062,8 MiB/s
serpent-cbc 256b 104,0 MiB/s 750,4 MiB/s
twofish-cbc 256b 224,5 MiB/s 492,9 MiB/s
aes-xts 256b 7470,0 MiB/s 7461,2 MiB/s
serpent-xts 256b 661,8 MiB/s 684,3 MiB/s
twofish-xts 256b 458,4 MiB/s 462,8 MiB/s
aes-xts 512b 6761,5 MiB/s 6750,0 MiB/s
serpent-xts 512b 660,8 MiB/s 683,2 MiB/s
twofish-xts 512b 458,3 MiB/s 462,8 MiB/s