Hi,
having read the wiki article https://wiki.archlinux.org/index.php/So … _Scheduler , my attention was caught by the current I/O schedulers for my disks:
SSD (two partitions: / and /boot, both ext4):
# cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
2 HDDs (part of ZFS mirror pool, partitioned by ZFS itself, mounted automatically on import by ZFS):
# cat /sys/block/sdb/queue/scheduler
[noop] deadline cfq
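(The bracketed entry in that sysfs file is the scheduler currently in effect; the other names are the available alternatives. As a quick sketch, assuming the standard sysfs format, the active one can be extracted like this:)

```shell
# Pull the active scheduler (the name in square brackets) out of the
# sysfs "scheduler" line; using a sample string instead of /sys here.
line='[noop] deadline cfq'
active=$(printf '%s\n' "$line" | sed -n 's/.*\[\(.*\)\].*/\1/p')
echo "$active"   # noop
```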
I am about to create my own udev rules to set noop and cfq for the SSD and the HDDs, respectively. I just wonder why they are not all [cfq], which should be the default.
P.S. I grepped all files in /etc for "noop" and "cfq" to make sure it was not some setting of mine forgotten long ago, and found nothing.
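(For reference, a minimal udev rule along those lines might look like the following; the filename and the exact match keys are one possible choice, keying off the rotational flag rather than device names:)

```
# /etc/udev/rules.d/60-ioschedulers.rules (hypothetical filename)
# non-rotational devices (SSDs) -> noop
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="noop"
# rotational devices (HDDs) -> cfq
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="cfq"
```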
Last edited by MilanKnizek (2015-04-22 08:32:59)
--
Milan Knizek
http://knizek.net
Offline
The kernel config sets the default, I believe:
% zgrep -i iosched /proc/config.gz
CONFIG_IOSCHED_NOOP=y
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
CONFIG_CFQ_GROUP_IOSCHED=y
CONFIG_DEFAULT_IOSCHED="cfq"
Perhaps zpools default to noop somehow.
Last edited by graysky (2015-04-22 07:40:35)
CPU-optimized Linux-ck packages @ Repo-ck • AUR packages • Zsh and other configs
Offline
Thanks for the quick response and suggestion. Some quick googling revealed that it is ZFS itself that sets the "noop" scheduler, as long as its pools occupy the whole disk device.
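(If I understand correctly, ZFS on Linux of that era exposed this behaviour through the zfs_vdev_scheduler module parameter, which defaulted to noop for whole-disk vdevs; assuming that parameter is available in your module version, it could be overridden via modprobe options, e.g.:)

```
# /etc/modprobe.d/zfs.conf (assumed path; parameter later removed in newer ZoL)
options zfs zfs_vdev_scheduler=noop
```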
--
Milan Knizek
http://knizek.net
Offline