Hi, I noticed that this happens on my system:
# iostat -mxd 1
Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdc               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdd               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-0              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
md0               0.00     0.00    0.00    0.00     0.00     0.00     0.00  2779.78    0.00    0.00    0.00   0.00 100.10
dm-2              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-3              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-4              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
dm-5              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
Over time it gets worse: avgqu-sz climbs into the millions even though the system is basically idle.
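For reference, my understanding is that iostat derives these two columns from the kernel's per-device counters: avgqu-sz is the growth of the weighted time-in-queue counter over the sampling interval, and %util is the growth of io_ticks. Here is a rough sketch of sampling md0's raw counters directly; it assumes the classic 11-field layout of /sys/block/md0/stat (which is what my kernel has):

# sample md0's raw block-layer counters twice, one second apart
# (fields: ... 9 = in_flight, 10 = io_ticks ms, 11 = time_in_queue ms)
a=($(cat /sys/block/md0/stat)); sleep 1
b=($(cat /sys/block/md0/stat))
# avgqu-sz ~ delta(time_in_queue) / interval_ms
echo "avgqu-sz ~ $(( (b[10] - a[10]) / 1000 ))"
# %util ~ delta(io_ticks) / interval_ms * 100
echo "%util    ~ $(( (b[9]  - a[9])  / 10   ))"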
md0 is a RAID5 device made up of three physical HDDs.
# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd[3] sdc[1] sdb[0]
      5860270080 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/22 pages [0KB], 65536KB chunk
unused devices: <none>
On top of the RAID5 there is a LUKS-encrypted volume.
I don't see any real performance problem; I'm just wondering why the average queue size is so high.
After a reboot, before unlocking the LUKS volume, avgqu-sz sat steadily at 19 and iostat still showed 100% utilization.
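Also, if I read the block-layer accounting right, io_ticks and time_in_queue only advance while the kernel believes requests are in flight on the device, so a stuck in-flight count on md0 would produce exactly this pattern: 100% utilization and an ever-growing queue on an idle array. The raw counter is field 9 of the same stat file and should read 0 when the array is idle:

# field 9 of /sys/block/md0/stat is the in-flight request count;
# a nonzero value on an idle array means the accounting is stuck
awk '{print "in_flight:", $9}' /sys/block/md0/stat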
Can somebody please explain?
Thanks,
Klement
Please use code tags when pasting to the boards https://wiki.archlinux.org/index.php/Fo … s_and_code