
#1 2018-06-28 21:24:59

oz
Member
Registered: 2004-05-20
Posts: 102

qemu/kvm Linux guest forces I/O scheduler to blk-mq?

I'm booting an Arch Linux guest in qemu/kvm, and it seems to force the blk-mq scheduling subsystem to be enabled (none, kyber, bfq, mq-deadline) no matter what I do.

Normally you would need to add "scsi_mod.use_blk_mq=1" to the kernel parameters to enable this subsystem; otherwise the default is the single-queue schedulers (noop, deadline, cfq).
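For reference, here's how to check which subsystem is active inside the guest (assuming the disk shows up as /dev/sda; adjust the device name to match your setup). The output lists the available schedulers with the active one in brackets:

cat /sys/block/sda/queue/scheduler
noop deadline [cfq]              <- if the single-queue system is active
[mq-deadline] kyber bfq none     <- if the multi-queue system is active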

How can I enable the normal noop/cfq schedulers in a guest? The host is using cfq, and I want to test noop, deadline, and cfq in the guest. I've tried setting scsi_mod.use_blk_mq=0 in the guest, but that doesn't work; only the mq schedulers are available.
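(For what it's worth, the parameter is reaching the kernel; it shows up when I check the guest's command line. The BOOT_IMAGE/root values below are just placeholders, yours will differ:)

cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-linux root=/dev/sda1 rw scsi_mod.use_blk_mq=0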

Anyone know why this is even happening? What is forcing this?

Thanks


#2 2018-06-28 23:03:31

qinohe
Member
From: Netherlands
Registered: 2012-06-20
Posts: 1,494

Re: qemu/kvm Linux guest forces I/O scheduler to blk-mq?

Hi oz, it's not clear to me whether you actually did this, but did you add 'elevator=noop' to the kernel line of the guest?
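If the guest uses GRUB, that would look roughly like this (a sketch; keep whatever options are already in the variable on your system):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet elevator=noop"

# then regenerate the config:
grub-mkconfig -o /boot/grub/grub.cfg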


#3 2018-06-28 23:14:43

oz
Member
Registered: 2004-05-20
Posts: 102

Re: qemu/kvm Linux guest forces I/O scheduler to blk-mq?

The elevator setting only changes the sub-scheduler. With the single-queue scheduler system you can switch between noop, cfq, and deadline; with the multi-queue system you can switch between none, kyber, bfq, and mq-deadline. So that doesn't help here, because I'm trying to switch the subsystem itself between single and multi. Right now it's stuck on the mq schedulers, so there is no way to select noop, cfq, or the normal deadline.
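Switching the sub-scheduler at runtime works fine within whichever subsystem is active, e.g. (as root, again assuming the disk is sda):

# with the single-queue system active:
echo noop > /sys/block/sda/queue/scheduler
# with the multi-queue system active:
echo none > /sys/block/sda/queue/scheduler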


#4 2018-06-29 00:18:42

oz
Member
Registered: 2004-05-20
Posts: 102

Re: qemu/kvm Linux guest forces I/O scheduler to blk-mq?

Interesting...

If I use a drive image by specifying "-hda foo", I get the single-queue scheduler by default, and I can set scsi_mod.use_blk_mq=1 if I want the multi-queue system.

If I use virtio-scsi, it uses the multi-queue scheduler system by default, and it can't be changed to the single-queue system. WTF... why? I don't get it. The kernel is changing its behavior based on the qemu/kvm settings. There must be some way to do it, because one of my remote KVM servers is using the scsi HBA with the single-queue io scheduler in the kernel.

Is there some qemu setting I'm not using correctly?

This works and can select any scheduler system:

qemu-system-x86_64 -machine type=pc,accel=kvm -cpu host -hda test.raw -m 2G -cdrom archlinux.iso

This is forced to use the multi-queue system:

qemu-system-x86_64 -machine type=pc,accel=kvm -cpu host -m 2G -device virtio-scsi-pci,id=scsi0 -drive id=hdroot,file=test.raw,if=none,media=disk,format=raw -device scsi-hd,drive=hdroot -cdrom archlinux.iso

Edit:
This seems to be an Arch Linux-specific thing, because I can boot Ubuntu with the virtio-scsi setup and the single-queue io scheduler is the default. So what is different about the default Arch kernel? Is it possibly a change between kernel 4.15 (Ubuntu) and 4.16 (Arch)?
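One way to compare the build-time defaults of the two kernels: the Arch kernel exposes its config at /proc/config.gz (on Ubuntu it's usually /boot/config-$(uname -r) instead). If the Arch kernel prints CONFIG_SCSI_MQ_DEFAULT=y here, that alone would flip the default for SCSI devices:

zcat /proc/config.gz | grep SCSI_MQ_DEFAULT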

Also, during boot the Arch kernel hangs for about 15 seconds before finally doing the scsi initialization. Ubuntu goes right through it with no delay. It definitely seems to be something odd with the Arch kernel.

Last edited by oz (2018-06-29 01:21:28)

