When you get the chance, run
$ ionice -c3 dd if=/dev/zero of=dumpfile bs=4096 count=1000000
And see what performance impact it has on your desktop. Note that -c3 indicates the process will get "idle" IO priority, which according to the ionice man page means this:
Idle: A program running with idle I/O priority will only get disk time when no other program has asked for disk I/O for a defined grace period. The impact of an idle I/O process on normal system activity should be zero. This scheduling class does not take a priority argument. Presently, this scheduling class is permitted for an ordinary user (since kernel 2.6.25).
On my laptop, the dd process brings my system to a crawl even when ioniced to idle. Does that happen to anyone else?
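If you want to watch what happens while it runs, something like this in another terminal works for me (iostat comes from the sysstat package; plain vmstat will do in a pinch):
$ vmstat 1          # watch the "wa" (I/O wait) column
$ iostat -x 1       # per-device utilization, needs sysstat installed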
Offline
Definitely slower, but not to a crawl.
Load average went up to 2.5.
Offline
I didn't really see any performance hit, but the operation only took 30 seconds so it's hard to tell.
Offline
Definitely slower, but not to a crawl.
Load average went up to 2.5.
CPU usage around 10-20% here, but graphical applications were failing to refresh and such.
I didn't really see any performance hit, but the operation only took 30 seconds so it's hard to tell.
I assume you have an SSD? My laptop has a bog-standard SATA hard drive.
Last edited by Gullible Jones (2012-05-21 15:33:53)
Offline
Not bog-standard (WD RE3 "enterprise" drive), but it's still a hard drive.
Offline
Ah, oh well...
I'll note, BTW, that reducing /sys/block/sdX/queue/nr_requests to 4-16 (the default is 128) seems to be somewhat helpful for performance under load. (I assume this is because desktops don't routinely deal with more than 4 disk-intensive tasks at once.) But as far as I can tell, there's never any difference between dd with and without ionice.
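In case anyone wants to try it, this is roughly what I do (sda and the value 8 are just examples, and the setting does not persist across reboots):
$ cat /sys/block/sda/queue/nr_requests            # current queue depth, default 128
$ echo 8 | sudo tee /sys/block/sda/queue/nr_requests   # pick something in the 4-16 range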
Offline
Okay, it looks to me like the issue is that the kernel is caching writes in swap space. Why it would do that when RAM is available I have no idea, let alone why it would do that when swappiness is set to 0. Seems to me the smart thing to do would be to flush the write cache immediately when RAM runs out, no? That would reduce throughput, but it would be a heck of a lot better than caching writes in swap, and thereby bringing throughput down to nothing and latency through the roof.
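If it really is just the write cache ballooning, the knobs I should probably be poking at are the dirty-page thresholds rather than swappiness; something like the following (the byte values are numbers I pulled out of the air, not a recommendation):
$ sysctl vm.dirty_background_ratio vm.dirty_ratio     # current thresholds, in % of RAM
$ sudo sysctl -w vm.dirty_background_bytes=67108864   # start background writeback after ~64 MB dirty
$ sudo sysctl -w vm.dirty_bytes=268435456             # block writers once ~256 MB is dirty
Setting the _bytes variants overrides the _ratio ones.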
Offline
What are we trying to accomplish here? Are we converging on that? ...
Offline
A Linux desktop that won't stall out when copying large files around.
Offline
It didn't seem to cause any problems on my older computer with 512 MB of RAM and twice that in swap. Maybe an issue with 64-bit Linux?
Offline
No problems and nothing of note here.
x64 Celeron 430 / 1.5 GB RAM, 512 MB swap
The only thing is that new actions take longer to start than already-running ones.
load average: 2.40, 1.66, 1.21
Offline
This might be a situation where GNU time, from extra/time, is more useful than bash's built-in.
$ /usr/bin/time -v ionice -c3 dd if=/dev/zero of=dumpfile bs=4096 count=1000000
1000000+0 records in
1000000+0 records out
4096000000 bytes (4.1 GB) copied, 38.545 s, 106 MB/s
Command being timed: "ionice -c3 dd if=/dev/zero of=dumpfile bs=4096 count=1000000"
User time (seconds): 0.14
System time (seconds): 8.92
Percent of CPU this job got: 22%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:39.51
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 884
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 2
Minor (reclaiming a frame) page faults: 453
Voluntary context switches: 5947
Involuntary context switches: 1358
Swaps: 0
File system inputs: 64
File system outputs: 8000000
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
Edit: Results without 'ionice -c3' are nearly identical:
$ /usr/bin/time -v dd if=/dev/zero of=dumpfile bs=4096 count=1000000
1000000+0 records in
1000000+0 records out
4096000000 bytes (4.1 GB) copied, 37.7916 s, 108 MB/s
Command being timed: "dd if=/dev/zero of=dumpfile bs=4096 count=1000000"
User time (seconds): 0.14
System time (seconds): 9.06
Percent of CPU this job got: 23%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:38.83
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 884
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 2
Minor (reclaiming a frame) page faults: 272
Voluntary context switches: 7475
Involuntary context switches: 4398
Swaps: 0
File system inputs: 56
File system outputs: 8000000
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
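One possible reason the two runs look the same, though this is just a guess since I haven't dug into it: as far as I know, ionice's priority classes are only honored by the CFQ I/O scheduler, so on deadline or noop there is nothing for the idle class to do. The active scheduler shows up in brackets here:
$ cat /sys/block/sda/queue/scheduler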
Last edited by thisoldman (2012-05-21 23:42:29)
Offline
A Linux desktop that won't stall out when copying large files around.
I'd be curious to see how this test fares against a bfs-enabled kernel.
Offline
I find the description in the man page odd, though, saying the impact on normal activity should be "zero".
There are plenty of programs that do I/O as part of normal activity (IMO). E.g., I noticed weechat being one of them: with high I/O load, weechat tends to hang a lot... I find this rather ridiculous... like it's trying to fsync() every single log line... maybe there's a setting I can change there...
And even when a process only gets "idle" I/O, the writes still take time once they are actually issued, during which no other program can access the disk.
Thinking more about that I wonder if there's a "nice" way to have a tmpfs synced to an actual directory regularly, which I could use for weechat's logging for instance... or for ~/.mozilla, or ~/.thumbnails or whatever...
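A very crude sketch of what I mean, with made-up paths and a made-up interval; a real version would also need to sync back once at logout:
$ mkdir -p ~/.weechat-ondisk                                           # persistent copy on disk
$ sudo mount -t tmpfs -o size=64m,uid=$(id -u),gid=$(id -g) tmpfs ~/.weechat
$ rsync -a ~/.weechat-ondisk/ ~/.weechat/                              # preload the tmpfs from disk
$ while sleep 300; do rsync -a --delete ~/.weechat/ ~/.weechat-ondisk/; done &   # write back every 5 minutes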
Last edited by Blµb (2012-05-26 08:30:21)
Offline
Thinking more about that I wonder if there's a "nice" way to have a tmpfs synced to an actual directory regularly, which I could use for weechat's logging for instance... or for ~/.mozilla, or ~/.thumbnails or whatever...
Something like anything-sync-daemon?
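If I remember right, asd just takes a list of directories to keep in tmpfs and periodically sync back, roughly like this in its config file (going from memory, so check the package's documentation for the exact file and syntax):
WHATTOSYNC=('/home/you/.weechat/logs' '/home/you/.mozilla')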
Offline