#1 2019-02-02 18:30:54

linduxed
Member
Registered: 2008-10-12
Posts: 64

[SOLVED] Heavy I/O leads to guaranteed long system lock-up

I'm experiencing (what I think is) an extreme version of the classic issue of system slowdown that happens when you, for instance, copy a ton of data from a USB-connected hard drive.

Some details about the system drive to start off with:

  • The SSD has a small EFI partition for booting, which later gets mounted on /boot.

  • The rest of the space on the SSD is dm-crypt encrypted.

  • The decrypted space (available as "/dev/mapper/cryptroot") is formatted as btrfs.

  • btrfs has two subvolumes, one for / and one for /home.

  • Based on smartctl data that I showed to some people in #archlinux, the drive is in OK health.

  • I use the linux-zen kernel.

  • "cat /sys/block/sdc/queue/scheduler" gives "mq-deadline kyber [bfq] none"

Here's the problem:

When some application decides to do some heavier I/O, the system will become extremely unresponsive. Primary symptoms:

  • The window manager (I use i3) still allows mouse movement, but input is buffered for anything from 30 seconds to multiple minutes. I basically cannot interact with the system.

  • My tray clock still functions.

  • My tray load average applet keeps climbing, slowly but steadily, for all three values (quickly reaching values that exceed the number of cores I have).

  • If I have my desktop with htop and atop up, I can see that none of the CPUs are loaded and RAM is fine.

  • atop indicates that there's only one part of the system that exhibits high load: the /dev/sdc drive.

Here are some activities that lock up the system, without fail:

  • I close down Steam and it needs to save some data. Steam does not do this every time, but when it does: lockup.

  • systemd initiates its scheduled man-page DB update at midnight.

  • Thunderbird needs to re-download a large folder or a whole account.

  • Killing a large Firefox session seems to result in the system writing some kind of core dump to disk.

  • I import an 800 MB music file into Audacity.

The last one in particular is what I have recently been using as a surefire way to trigger a system lockup.

In atop, the following values are what I can expect from my system without high I/O:

LVM |     cryptroot |  busy      0%  |  read       0 |  write    146  |  MBw/s    0.9 |  avio 0.25 ms  |
DSK |           sdc |  busy      0%  |  read       0 |  write    105  |  MBw/s    0.9 |  avio 0.31 ms  |

Now let's run the Audacity music file import and look at the values during that. This is what a high-load situation looks like (imagine that all of the following is written in red):

LVM |     cryptroot |  busy    101%  |  read     258 |  write       0 |  MBw/s    0.0 |  avio 38.8 ms  |
DSK |           sdc |  busy    100%  |  read     254 |  write    1835 |  MBw/s   93.2 |  avio 4.77 ms  |
LVM |     cryptroot |  busy    100%  |  read       1 |  write       0 |  MBw/s    0.0 |  avio 10007 ms  |
DSK |           sdc |  busy    100%  |  read       1 |  write    1929 |  MBw/s   98.0 |  avio 5.16 ms  |
LVM |     cryptroot |  busy    100%  |  read     202 |  write    4920 |  MBw/s  249.6 |  avio 1.94 ms  |
DSK |           sdc |  busy     99%  |  read     199 |  write    2242 |  MBw/s  113.6 |  avio 4.05 ms  |

It should be mentioned that the "avio" value for the "cryptroot" row occasionally spiked to 450 ms, 600 ms, and even 10007 ms. When my system locks up and I manage to get the desktop with atop up, it's not uncommon for the "avio" values to be in the three- or four-digit millisecond range.

For reference, the Audacity import is probably the least problematic of these high I/O activities, as I can still switch desktops most of the time, which is often impossible in the other scenarios outlined above. I suspect that Audacity only produces spikes of high-volume writing, while the other tasks generate more sustained high I/O.

What I would like to figure out:

  1. Is this normal?

  2. Can I change something so that I can use my system in a responsive fashion, despite high I/O?

As a side note: when I run an application called leela-zero (called from an application called Sabaki), which puts high strain on my GPU, I experience a general system slowdown (beyond the GPU computations themselves). Because the characteristics are slightly different (slowdown instead of lockup), I'm going to assume that this is a separate problem, but I'm mentioning it in case it turns out to be related.

Last edited by linduxed (2019-02-17 08:38:06)

#2 2019-02-02 19:41:14

seth
Member
Registered: 2012-09-03
Posts: 50,012

Re: [SOLVED] Heavy I/O leads to guaranteed long system lock-up

Try passing "scsi_mod.use_blk_mq=0" to the kernel command line.
Also try the vanilla and the LTS kernels.
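
If you're booting with GRUB, that would be something like the following (just a sketch; adapt it to your bootloader and keep your existing parameters in place of the "..."):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="... scsi_mod.use_blk_mq=0"

# regenerate the config, then reboot
grub-mkconfig -o /boot/grub/grub.cfg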

Do you have a swap partition on the LVM?

#3 2019-02-02 19:52:04

loqs
Member
Registered: 2014-03-06
Posts: 17,197

Re: [SOLVED] Heavy I/O leads to guaranteed long system lock-up

seth wrote:

Try passing "scsi_mod.use_blk_mq=0" to the kernel command line.

Just to note that this is scheduled to become a no-op with 5.0.

#4 2019-02-02 19:53:44

seth
Member
Registered: 2012-09-03
Posts: 50,012

Re: [SOLVED] Heavy I/O leads to guaranteed long system lock-up

Which seems rather unfortunate. And a bit hasty.

#5 2019-02-02 20:05:21

linduxed
Member
Registered: 2008-10-12
Posts: 64

Re: [SOLVED] Heavy I/O leads to guaranteed long system lock-up

I'll try the vanilla and LTS kernels, in that case, to see if it makes any difference. I'll skip the kernel flag, since it'll be going away soon anyway.
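
For reference, pulling in the other kernels should just be a matter of something like this (assuming the boot entries get regenerated afterwards, e.g. with grub-mkconfig; I'll adapt it to my setup):

# install the vanilla and LTS kernels alongside linux-zen
pacman -S linux linux-lts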

seth wrote:

Do you have a swap partition on the LVM?

I don't use LVM; it's probably atop confusing either btrfs or dm-crypt with LVM. As for swap, I don't have any.

#6 2019-02-03 08:12:51

stronnag
Member
Registered: 2011-01-25
Posts: 60

Re: [SOLVED] Heavy I/O leads to guaranteed long system lock-up

Set the block scheduler to 'none'. That mitigates the problem on RAID1 here.
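
For the record, the scheduler can be switched on the fly and made persistent with a udev rule, roughly like so (sdc being the drive in question; the rule file name is arbitrary):

# switch the scheduler at runtime (as root; takes effect immediately)
echo none > /sys/block/sdc/queue/scheduler

# /etc/udev/rules.d/60-ioschedulers.rules
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="none"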

#7 2019-02-03 08:38:21

seth
Member
Registered: 2012-09-03
Posts: 50,012

Re: [SOLVED] Heavy I/O leads to guaranteed long system lock-up

You should try the kernel flag to determine the cause, not as a permanent solution.
As loqs pointed out, this will soon become a no-op, so if there are more pending issues with the multi-queue schedulers, we'd better figure that out *now*.

#8 2019-02-04 07:31:45

phw
Member
Registered: 2013-05-27
Posts: 318

Re: [SOLVED] Heavy I/O leads to guaranteed long system lock-up

I would actually expand on the answers above and say: try different schedulers and see how that changes the behavior of your system under heavy I/O. I have experienced pretty different results on different system setups. I'm currently using mq-deadline on my SSD here, and it seems to give the best responsiveness during heavy I/O load for me, while e.g. "none" causes frequent issues with processes waiting for I/O.
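
A quick way to see what is available and active on every disk (the entry in brackets is the active one):

grep "" /sys/block/*/queue/scheduler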

#9 2019-02-09 15:40:16

mediaserf
Member
Registered: 2010-11-29
Posts: 11

Re: [SOLVED] Heavy I/O leads to guaranteed long system lock-up

I have had a similar issue and have been using this patch to mitigate it: https://marc.info/?l=linux-kernel&m=154883900500866

The patch is in the tree for the 5.0 kernel release.

#10 2019-02-09 16:08:36

loqs
Member
Registered: 2014-03-06
Posts: 17,197

Re: [SOLVED] Heavy I/O leads to guaranteed long system lock-up

Unfortunately https://git.kernel.org/pub/scm/linux/ke … e7b2f1cbce was not tagged for stable.

#11 2019-02-17 08:37:35

linduxed
Member
Registered: 2008-10-12
Posts: 64

Re: [SOLVED] Heavy I/O leads to guaranteed long system lock-up

After discussions in the #archlinux IRC channel, I opted to try out the following sysctl settings, placed in "/etc/sysctl.d/40-dirty.conf":

vm.dirty_background_bytes = 52428800
vm.dirty_bytes = 314572800
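
The settings can be applied without a reboot by reloading the sysctl drop-in files:

# re-reads /etc/sysctl.d/*.conf, including the new file
sysctl --system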

The previous default settings were as follows, obtained with "sysctl -a | grep dirty":

vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 20
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 50
vm.dirty_writeback_centisecs = 500
vm.dirtytime_expire_seconds = 43200

With the new settings I can still tell when the system is under heavy I/O load, but the symptoms that I described earlier in the thread last for a much shorter time.

For the time being, I consider this an acceptable solution.

Last edited by linduxed (2019-05-30 09:08:41)
