I experienced a drastic slowdown reading files from the SSD, and also from legacy hard disks with LUKS, after upgrading to linux 5.* (every version).
Already at boot I can see that even mounting the LUKS partitions takes considerably longer (around a second) than before (it was instant).
root@HomeC ~# lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda               8:0    0 298,1G  0 disk
├─sda1            8:1    0     3G  0 part  /boot
└─sda2            8:2    0 295,1G  0 part
  └─cryptcommon 254:1    0 295,1G  0 crypt /Common
sdb               8:16   0   1,8T  0 disk
└─sdb1            8:17   0   1,8T  0 part
  └─crypthome   254:2    0   1,8T  0 crypt /home
sdc               8:32   0  59,6G  0 disk
└─sdc1            8:33   0  59,6G  0 part
  └─cryptroot   254:0    0  59,6G  0 crypt /
Then my whole desktop experience becomes a pain: logging in takes ages (about ten times longer than before), and starting any program lags painfully too.
journalctl --follow shows nothing unusual, no errors whatsoever.
Downgrading to linux 4.* helps, the system returns to its original snappy behavior.
I searched on Google and here on the forum for "linux 5 slow lvm", but that turned up no clues.
I don't know what other information I could provide; please help me.
UPDATE: s|LVM|LUKS|g
Last edited by SanskritFritz (2019-04-18 22:31:42)
zʇıɹɟʇıɹʞsuɐs AUR || Cycling in Budapest with a helmet camera || Revised log levels proposal: "FYI" "WTF" and "OMG" (John Barnette)
Possibly related: https://bbs.archlinux.org/viewtopic.php?id=244699
I just realised that I totally mixed up LVM with LUKS. My problem is with LUKS; I don't even have LVM running. I have rewritten my first post, sorry for the confusion.
Does `dmesg | grep crypt` give you `device-mapper: crypt: xts(aes) using implementation "xts-aes-aesni"`? The slow (software) AES implementation can end up selected if the aesni module is not loaded in time, and loading the module later does not change it.
Otherwise, can you confirm the device performs well when read directly, without going through LUKS? There were some reports of slow I/O on 5.x kernels in general; no idea whether there is a solution. (It works fine for me.)
Anything else in dmesg?
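A minimal sketch of such a raw-read check, assuming the device names from the lsblk output above (substitute your own, and run as root). The fallback branch is only there so the dd invocation shape can be demonstrated on a machine without that device:

```shell
#!/bin/sh
# Hedged sketch: measure read throughput below the LUKS layer.
# DEV is an assumption taken from the lsblk output above -- substitute yours.
DEV=/dev/sda2
if [ -r "$DEV" ]; then
    # iflag=direct bypasses the page cache, so this times the disk itself
    dd if="$DEV" of=/dev/null bs=1M count=1024 iflag=direct status=progress
else
    # Demonstration fallback when no such device is readable:
    # a plain (cached) file read, just to show the dd invocation
    f=$(mktemp) && dd if=/dev/zero of="$f" bs=1M count=64 status=none
    dd if="$f" of=/dev/null bs=1M status=none && echo "read ok"
    rm -f "$f"
fi
```

Comparing this against the same dd run on `/dev/mapper/cryptcommon` should show whether the slowdown is in the disk path or in the dm-crypt layer.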
Last edited by frostschutz (2019-04-01 08:41:48)
[ 21.048626] device-mapper: crypt: xts(aes) using implementation "xts(ecb(aes-asm))"
[ 25.431059] device-mapper: crypt: xts(aes) using implementation "xts(ecb(aes-asm))"
The only non-encrypted partition is /boot. How can I test the I/O performance?
On the other hand, I downgraded linux at my workplace too, and that machine also became snappier. I don't use encryption on my work computer, which is a VMware virtual machine.
So it seems to me you were right and this is a general IO slowdown.
Does the CPU support AES-NI?
lscpu | grep --color aes
If you check the kernel messages in the journal for a 4.20 boot, what AES implementation does it use? Is the slowdown present on linux-lts?
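As a sketch, both checks can be scripted; the sample lines below are the ones quoted in this thread, so the same grep filters can be applied to real `lscpu` and `journalctl` output:

```shell
# Check 1: does the CPU advertise AES-NI? (against a live system:
#   lscpu | grep -wo aes)
# The Flags line below is abridged from this thread and contains no "aes".
flags='fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov ssse3 cx16'
echo "$flags" | grep -wq aes && echo 'AES-NI present' || echo 'no AES-NI'

# Check 2: which implementation did dm-crypt pick? (against a live system,
# e.g. the previous boot: journalctl -k -b -1 | grep 'device-mapper: crypt')
line='[   21.048626] device-mapper: crypt: xts(aes) using implementation "xts(ecb(aes-asm))"'
echo "$line" | grep -o 'using implementation "[^"]*"'
```

On this CPU the first check prints "no AES-NI" and the second extracts `using implementation "xts(ecb(aes-asm))"`, i.e. the assembler software fallback.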
> Does the cpu support aes
No:
Model name: Intel(R) Core(TM)2 CPU 6420 @ 2.13GHz
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss
ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl cpuid aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm lahf_lm pti tpr_shadow dtherm
> If you check the kernel messages in the journal for a 4.20 boot what aes implementation is that using?
ápr 01 23:21:43 HomeC systemd-cryptsetup[362]: Set cipher aes, mode xts-plain64, key size 256 bits for device /dev/disk/by-uuid/f63453a3-449b-412d-86c7-1e1f39294c02.
ápr 01 23:21:43 HomeC systemd-cryptsetup[363]: Set cipher aes, mode xts-plain64, key size 256 bits for device /dev/disk/by-uuid/571b3538-aad6-4cb5-9d72-27040b4d557d.
> Is the slowdown present on linux-lts?
Probably not; the problem definitely started with linux 5.
I haven't tried linux-lts, though, as it is not installed.
linux-zen, which is also installed, has the same problem.
Please test the following under 5.0 and 4.20 or 4.19 to see if the raw speed of AES has significantly changed.
cryptsetup benchmark --cipher aes-xts
You could also try disabling Spectre mitigations and the like ( https://wiki.ubuntu.com/SecurityTeam/Kn … onControls ; is there a similar page in the Arch wiki?)
# check vulnerabilities and active mitigations
head /sys/devices/system/cpu/vulnerabilities/*
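For a quick A/B test of that suggestion, the mitigations can also be switched off on the kernel command line. A sketch of the relevant GRUB setting, using switches that existed around kernel 5.0 (this reduces security and is for testing only; the exact value of the variable and your other parameters are assumptions, not taken from this thread):

```shell
# /etc/default/grub (sketch) -- "quiet" stands in for whatever parameters
# you already have. nopti disables Meltdown PTI, nospectre_v2 disables the
# Spectre v2 retpoline/STIBP mitigation, spec_store_bypass_disable=off
# disables the SSB mitigation.
GRUB_CMDLINE_LINUX_DEFAULT="quiet nopti nospectre_v2 spec_store_bypass_disable=off"
```

After editing, regenerate the config with `grub-mkconfig -o /boot/grub/grub.cfg` and reboot; `head /sys/devices/system/cpu/vulnerabilities/*` should then report the mitigations as disabled.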
> Please test the following under 5.0 and 4.20 or 4.19 to see if the raw speed of AES has significantly changed.
It gives roughly the same values:
4.20:
root@HomeC ~> cryptsetup benchmark --cipher aes-xts
# Tests are approximate using memory only (no storage IO).
# Algorithm | Key | Encryption | Decryption
aes-xts 256b 101,4 MiB/s 93,0 MiB/s
5.0:
root@HomeC ~> cryptsetup benchmark --cipher aes-xts
# Tests are approximate using memory only (no storage IO).
# Algorithm | Key | Encryption | Decryption
aes-xts 256b 101,2 MiB/s 99,6 MiB/s
@frostschutz:
root@HomeC ~# head /sys/devices/system/cpu/vulnerabilities/*
==> /sys/devices/system/cpu/vulnerabilities/l1tf <==
Mitigation: PTE Inversion; VMX: EPT disabled
==> /sys/devices/system/cpu/vulnerabilities/meltdown <==
Mitigation: PTI
==> /sys/devices/system/cpu/vulnerabilities/spec_store_bypass <==
Vulnerable
==> /sys/devices/system/cpu/vulnerabilities/spectre_v1 <==
Mitigation: __user pointer sanitization
==> /sys/devices/system/cpu/vulnerabilities/spectre_v2 <==
Mitigation: Full generic retpoline, STIBP: disabled, RSB filling
Are the results of dd any different between 4.20 and 5.0?
> The results of dd any different between 4.20 and 5.0?
Running this script:
#!/bin/bash
# Write 1 GiB, flushing to disk so the timing includes the physical write
dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc status=progress
# Drop the page cache so the next read has to come from disk
echo 3 > /proc/sys/vm/drop_caches
# First read: uncached; second read: served from the page cache
dd if=tempfile of=/dev/null bs=1M count=1024 status=progress
dd if=tempfile of=/dev/null bs=1M count=1024 status=progress
rm tempfile
The results, somewhat formatted (rows: write / uncached read / cached read):
# Linux 4.20.11           # Linux 5.0.5
# Encrypted ext4, HDD
14,9356 s,  71,9 MB/s     17,92 s,    59,9 MB/s
11,1985 s,  95,9 MB/s     16,154 s,   66,5 MB/s
0,385833 s,  2,8 GB/s     0,354816 s,  3,0 GB/s
# Encrypted ext4, SSD
40,2004 s,  26,7 MB/s     37,4672 s,  28,7 MB/s
10,8309 s,  99,1 MB/s     10,2746 s,  105 MB/s
0,376284 s,  2,9 GB/s     0,390699 s,  2,7 GB/s
# Plain ext2, HDD
24,191 s,   44,4 MB/s     23,3788 s,  45,9 MB/s
16,7485 s,  64,1 MB/s     17,2888 s,  62,1 MB/s
0,436738 s,  2,5 GB/s     0,398474 s,  2,7 GB/s
The difference is IMO marginal compared to the huge lags I experience. It looks to me as if linux lagged *before* reading any file.
Well, I have good news: the upgrade to linux 5.0.7 has brought a breakthrough, and everything is normal again. Thanks for all the help, guys; I hope this bug disappears forever...