
#1 2013-02-26 17:37:52

Gullible Jones
Member
Registered: 2004-12-29
Posts: 4,863

LIFO I/O scheduling?

I have made a small modification to my custom kernel, in the source file for the noop I/O scheduler, noop-iosched.c - I replaced

        list_add_tail(&rq->queuelist, &nd->queue);

on line 45 with

        list_add(&rq->queuelist, &nd->queue);

thus turning the FIFO queue-based scheduler into a LIFO stack-based one.
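For context, here is roughly what the relevant hooks in noop-iosched.c look like in kernels of that era - paraphrased from memory, so check your own tree for the exact code. The dispatch hook always pops from the head of the list, which is why a one-word change to the add hook flips the service order from oldest-first to newest-first:

    static void noop_add_request(struct request_queue *q, struct request *rq)
    {
        struct noop_data *nd = q->elevator->elevator_data;

        /* Was list_add_tail(): enqueue at the tail -> FIFO.
         * Now list_add(): push at the head -> LIFO. */
        list_add(&rq->queuelist, &nd->queue);
    }

    static int noop_dispatch(struct request_queue *q, int force)
    {
        struct noop_data *nd = q->elevator->elevator_data;

        if (!list_empty(&nd->queue)) {
            struct request *rq;

            /* Always takes the head entry, regardless of how
             * requests were inserted. */
            rq = list_entry(nd->queue.next, struct request, queuelist);
            list_del_init(&rq->queuelist);
            elv_dispatch_sort(q, rq);
            return 1;
        }
        return 0;
    }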

Does it work? Well, I'm using it in an Ubuntu VM, and it hasn't crashed and burned yet. In fact it performs quite well. But VM performance is not really indicative of performance on bare hardware, so I will have to test it on a physical machine at some point...

But anyway, my hypothesis is as follows.

- The noop scheduler gives processes disk access mostly in the order requested (aside from request merging). So a long-running, I/O-heavy process can starve everything else and effectively hang the system. This is bad.

- The cfq scheduler tries to be more fair. But this means that a long-running, I/O-intensive process will still take a chunk out of desktop responsiveness; it gets its fair share, after all.

- How about, instead of trying to prevent I/O starvation, we make sure that the right tasks get starved?

A stack, I thought, would do this by its nature. Since it's LIFO, new tasks would get their requests serviced immediately, while longer-running and less urgent tasks would get bumped down the line - effectively relegated to idle status until the new stuff finished.

This was my line of thinking anyway...
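To make that concrete, here's a toy userspace illustration (plain C, nothing to do with the kernel code) of the service order each policy produces when a bulk writer has already flooded the queue and an interactive request arrives last:

    /* Toy comparison of FIFO vs. LIFO service order. Not kernel code;
     * it just prints the order each policy would drain one arrival burst. */
    #include <stdio.h>

    int main(void)
    {
        /* Arrival order: three requests from a bulk writer (think dd),
         * then one from an interactive task. */
        const char *arrivals[] = { "bulk-1", "bulk-2", "bulk-3", "interactive" };
        const int n = sizeof(arrivals) / sizeof(arrivals[0]);
        int i;

        printf("FIFO (queue) order:");
        for (i = 0; i < n; i++)          /* oldest first */
            printf(" %s", arrivals[i]);
        printf("\n");

        printf("LIFO (stack) order:");
        for (i = n - 1; i >= 0; i--)     /* newest first */
            printf(" %s", arrivals[i]);
        printf("\n");
        return 0;
    }

Under FIFO the interactive request waits behind all three bulk requests; under LIFO it jumps straight to the front.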

I can see a problem with it though - it might be possible for an "urgent" request to get buried in the stack by less urgent ones, under some conditions. I would hope that request merging mitigates this at least a bit... Like I said, I'll have to try this on bare hardware.

BTW, I realize what I probably want involves cgroups. But this idea was born of reading about I/O schedulers and thinking, "Good gods, there must be a simpler way to do this properly!" Inverting the noop scheduler's behavior may not be "proper" in any way, shape, or form, but it does have the advantage of being brain-dead simple.

P.S. As far as performance under real-world desktop loads with real schedulers goes, I've actually found deadline to be far and away the best. It gets a bit freezy when you hammer it with dd, but for sane multitasking it works much better than cfq.
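(If anyone wants to compare schedulers themselves: they can be switched per-device at runtime through sysfs. sda below is just an example device name.)

    # Show the available schedulers; the active one is in brackets:
    cat /sys/block/sda/queue/scheduler
    # -> noop deadline [cfq]

    # Switch to deadline (as root):
    echo deadline > /sys/block/sda/queue/scheduler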


#2 2013-02-26 18:24:04

brebs
Member
Registered: 2007-04-03
Posts: 3,742

Re: LIFO I/O scheduling?

Gullible Jones wrote:

    make sure that the right tasks get starved?

That's impossible to guess. CK (Con Kolivas) has a good blog about this (regarding CPU scheduling, but still applicable).

One can tweak a process's I/O priority with the ionice command - a few examples below.
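Command sketches (the PID is just a placeholder):

    # Start a bulk job at idle I/O priority (class 3):
    ionice -c3 tar czf /tmp/backup.tar.gz /home

    # Demote an already-running process to best-effort (class 2), lowest level:
    ionice -c2 -n7 -p 1234

    # Check a process's current I/O scheduling class:
    ionice -p 1234

Note that, as far as I know, these classes are only fully honored by cfq.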


#3 2013-02-26 18:49:41

Gullible Jones
Member
Registered: 2004-12-29
Posts: 4,863

Re: LIFO I/O scheduling?

Thanks brebs, interesting read. I guess stack-based qualifies as a heuristic of sorts?

I tend to think that making desktops less laggy under sane conditions is worthwhile, even if it occasionally results in worse performance; my example (again) is the deadline scheduler, which will lag horribly when spammed with requests from dd, but otherwise has worked better for me than cfq. You can't have good performance 100% of the time, so IMO you might as well optimize for normal use patterns. It's not like desktops are used for life-critical realtime tasks (AFAIK).

OTOH, I'm not a kernel dev. :)

Anyway, I shouldn't even assume that LIFO will work at all on physical hardware, at this point.

Edit: I should also mention that I have used BFS kernels, with mixed results. On some hardware BFS seems to perform much better; on other hardware it seems on par with, or worse than, CFS.

Could be my imagination though; I've heard that I/O and memory management matter far more than CPU scheduling these days.

Last edited by Gullible Jones (2013-02-26 19:24:26)


#4 2013-02-27 01:44:49

Gullible Jones
Member
Registered: 2004-12-29
Posts: 4,863

Re: LIFO I/O scheduling?

Well, it works on my netbook... It seems to perform slightly worse than cfq or deadline under most conditions. Unlike the queue-based version of noop, it doesn't hard-lock when running dd, but that's about all that can be said for it.

Conclusion: a scheduler based on a brain-dead simple stack works well enough to be usable. But that is not saying much at all.


#5 2013-02-27 16:57:08

Gullible Jones
Member
Registered: 2004-12-29
Posts: 4,863

Re: LIFO I/O scheduling?

An addendum: I'm noticing that on recent 3.x kernels, there is very little difference between deadline and cfq behavior. On Ubuntu, update-apt-xapian-index uses twice as much CPU with cfq as it does with deadline, but otherwise everything seems the same lately.

Last edited by Gullible Jones (2013-02-27 16:59:29)

