First, I apologize for not reading the others' answers. This issue was solved in recent kernels, but if that patch still hasn't hit Arch, you can simply add
elevator=deadline
to the kernel options in GRUB and reboot. This setting switches the I/O elevator, the policy the kernel uses to schedule disk requests from applications. Other elevator settings are cfq and noop; the first is the default, and the last can lead to a complete stall if a single application loops on I/O indefinitely, but it may nevertheless solve your problem.
This may help too
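For reference, this is roughly where the option goes, assuming GRUB legacy with /boot/grub/menu.lst (the title, root device, and image paths below are illustrative; keep your own values):

```shell
# /boot/grub/menu.lst -- append elevator=deadline to the kernel line
title  Arch Linux
root   (hd0,0)
kernel /boot/vmlinuz26 root=/dev/sda2 ro elevator=deadline
initrd /boot/kernel26.img
```

After a reboot, cat /sys/block/sda/queue/scheduler shows the active scheduler in brackets.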
Thanks for the news :-)
Don't write "recent kernels"; a month from now nobody is going to hunt down what you might have meant by that phrase. Which kernel version do you mean?
Not working at all. See this thread, too.
Does "not working at all" mean that the kernel does not accept the argument (in which case you will have to configure the kernel itself; the option is in the first menu), or that it just didn't help?
It just does not help at all. Changing the I/O scheduler was among the first things I tried. But thanks anyway.
EDIT: I tried using linux-vanilla (from AUR) today. It did not work. Just for the record.
Last edited by akurei (2011-09-17 03:04:42)
I solved this by just not having a swap partition, and bam, never a slowdown again, no matter how many files I move or which I/O scheduler I use.
Then my two cents is that it was a partition alignment problem...
I also had hangs/freezes/slowdowns when copying (especially via USB), see here: https://bbs.archlinux.org/viewtopic.php?pid=988951
Downgrading the kernel to kernel26-lts entirely solved the slowdown problem for me.
Arch - makes me feel right at /home
gulafaran wrote: I solved this by just not having a swap partition, and bam, never a slowdown again, no matter how many files I move or which I/O scheduler I use.
Then my two cents is that it was a partition alignment problem...
I was copying some files over USB and had a temporary freeze twice.
Then I disabled swap and the transfer went fine(!)
Help me to improve ssh-rdp !
Retroarch User? Try my koko-aio shader !
Then my two cents is that it was a partition alignment problem...
So what do you mean by that, and how do I check whether my partitions are well aligned?
My assumption could have been true only if you had changed your partitions when the problem "got solved" (e.g. if you deleted the swap partition rather than just disabling it, as I understood from gulafaran's post).
Partition alignment is a tricky subject these days... I've never heard a final word on it. But, for example, GParted's default is to align your partitions to the mebibyte (optionally to cylinders).
After trying almost all available kernels and checking everything twice, I finally found a solution that worked wonders for me:
Run as root:
echo 1 > /proc/sys/vm/dirty_ratio && echo "echo 1 > /proc/sys/vm/dirty_ratio" >> /etc/rc.local
Hmm... I'm not sure I would want to set dirty_ratio to 1! Did you try echoing 1 to dirty_background_ratio? That makes more sense, provided it works for the issue at hand.
Ok, setting dirty_ratio to 5 and dirty_background_ratio to 1 seems to have helped a lot!
Oh, and the correct way to set them is via /etc/sysctl.conf.
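For example, persisting the values from this post would look like this in /etc/sysctl.conf (applied at boot, or immediately by running sysctl -p as root):

```shell
# /etc/sysctl.conf -- dirty page limits from this post
vm.dirty_ratio = 5
vm.dirty_background_ratio = 1
```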
Ah, okay. Thank you for the hint!
PS:
Kernel virtual memory management
In the latest 2.6 kernels it seems that a few settings have changed with regard to how virtual memory management is performed. Let's take a quick look at a few of them.
Dirty pages cleanup
There are two important settings which control the kernel behaviour with regard to dirty pages in memory. They are:
vm.dirty_background_ratio
vm.dirty_ratio
The first of the two (vm.dirty_background_ratio) defines the percentage of memory that can become dirty before background flushing of the pages to disk starts. Until this percentage is reached, no pages are flushed to disk. When the flushing starts, it is done in the background without disrupting any of the running processes in the foreground.
Now the second of the two parameters (vm.dirty_ratio) defines the percentage of memory which can be occupied by dirty pages before a forced flush starts. If the percentage of dirty pages reaches this number, all writes become synchronous: processes are not allowed to continue until the I/O they have requested is actually performed and the data is on disk. On high-performance I/O machines this causes a problem, as the data caching is cut away and all of the processes doing I/O (the important ones, in the dCache pool) become blocked waiting for it. The result is a large number of hanging processes, which leads to high load, which leads to an unstable system and poor performance.
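To make the percentages concrete, here is a small sketch that computes the byte thresholds the two ratios imply; the memory size and ratio values are made up for the example, not read from a live system:

```shell
# Illustrative values only: 8 GiB of RAM and the 2.6.20+ defaults.
MEM_KB=8388608          # total memory in kB (8 GiB)
BACKGROUND_RATIO=5      # vm.dirty_background_ratio
DIRTY_RATIO=10          # vm.dirty_ratio

bg_kb=$(( MEM_KB * BACKGROUND_RATIO / 100 ))
sync_kb=$(( MEM_KB * DIRTY_RATIO / 100 ))

echo "background flushing starts at ${bg_kb} kB of dirty pages"
echo "writes turn synchronous at ${sync_kb} kB of dirty pages"
```

So with these example numbers, background writeback starts at about 410 MiB of dirty data and processes start blocking at about 820 MiB.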
Now, the default values in Scientific Linux 4 with the default 2.6.9-cern{smp} kernel are a background ratio of 10% and a synchronous ratio of 40%. With the 2.6.20+ kernels, however, the defaults are respectively 5% and 10%. It is not hard to reach that 10% level and block your system; this is exactly what I faced when trying to understand why my systems were performing poorly and under high load while doing almost nothing. I finally managed to find a few parameters to watch which showed me what the system was doing. The two values to monitor are in the /proc/vmstat file:
$ grep -A 1 dirty /proc/vmstat
nr_dirty 30931
nr_writeback 0
If you monitor the values in your /proc/vmstat file, you will notice that before the system reaches the vm.dirty_ratio barrier, nr_dirty is a lot higher than nr_writeback; usually nr_writeback is close to 0, or occasionally flicks higher and then calms down again. If you do reach the vm.dirty_ratio barrier, you will see nr_writeback start to climb fast and become higher than ever before without dropping back. At least, it will not drop back easily if dirty_ratio is set to too small a number.
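A minimal way to watch the two counters as described above (three one-second samples; on a healthy system nr_writeback should stay near zero):

```shell
# Sample nr_dirty and nr_writeback from /proc/vmstat a few times.
# The trailing space in the pattern excludes nr_dirty_threshold etc.
for i in 1 2 3; do
    grep -E '^nr_(dirty|writeback) ' /proc/vmstat
    echo ---
    sleep 1
done
```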
I personally use vm.dirty_background_ratio = 3 and vm.dirty_ratio = 40 on my servers. You can set these variables by appending to the end of your /etc/sysctl.conf file:
$ grep dirty /etc/sysctl.conf
vm.dirty_background_ratio = 3
vm.dirty_ratio = 40
and then executing:
$ sysctl -p
To see your current settings for the dirty ratios, do the following:
$ sysctl -a | grep dirty
PS! The original vm.dirty_background_ratio for me was 15%, but Stijn De Weirdt e-mailed me to explain that it is not a good idea to let too high a level of dirty memory accumulate before starting disk writes; he recommended around 3-5%, so that flushing to disk starts quickly, as the underlying hardware should be able to handle that. In addition, what I did not mention is that if you grep for "dirty" in the sysctl -a output you will see a few more parameters, for example ones that force the flushing of pages older than X seconds. As I didn't tune any of these, I decided to leave them be and not describe them here; you can investigate their effects on your own.
Last edited by akurei (2011-10-04 20:29:21)
Laptop Mode Tools sets vm.dirty_background_ratio and vm.dirty_ratio in /etc/laptop-mode/laptop-mode.conf. It will overwrite any values set in /etc/sysctl.conf.
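A quick way to check which values actually ended up in effect is to read them straight from /proc/sys and then look for dirty-related settings in the laptop-mode config (the grep pattern is a guess at the option names, so adjust as needed):

```shell
# Effective values, regardless of who set them:
cat /proc/sys/vm/dirty_ratio /proc/sys/vm/dirty_background_ratio
# Possible overrides from Laptop Mode Tools, if installed:
grep -n 'DIRTY' /etc/laptop-mode/laptop-mode.conf 2>/dev/null \
    || echo "no laptop-mode config found"
```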
Last edited by Markus00000 (2011-10-10 12:32:28)
Does this still happen to you with 3.0.6?
It seems to me that the issue is fixed.
Yes, it happens for example on a laptop that is just a few months old and quite fast. Heavy disk input/output (extracting large archives or decoding/encoding) almost halts the whole system. I tried various values for:
vm.dirty_background_ratio
vm.dirty_ratio
vm.swappiness
vm.vfs_cache_pressure
And also tried the deadline scheduler.
No success. (At least no noticeable success. I have no idea if "responsiveness" can easily be measured objectively.)
Monitoring system stats during such an event reveals that lots of RAM is free. The CPU shows little load but spends a lot of time waiting on I/O (iowait). The disk is reading or writing at near full speed. Swap space is unused. Just :-(.
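One way to put a number on that observation is to sample the kernel's cumulative iowait counter; per the proc(5) layout, field 6 of the aggregate cpu line in /proc/stat is iowait in clock ticks:

```shell
# Sample iowait twice, one second apart, and print the delta.
t1=$(awk '/^cpu /{print $6}' /proc/stat)
sleep 1
t2=$(awk '/^cpu /{print $6}' /proc/stat)
echo "iowait ticks in the last second: $(( t2 - t1 ))"
```

If this number is large while CPU load is low, the system really is I/O-bound rather than CPU-bound.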
Last edited by Markus00000 (2011-10-12 08:11:41)
Has anyone tried installing linux-ck and switching to the BFQ I/O scheduler?
I just did that and it seems to greatly improve responsiveness while copying (or heavy I/O in general).
Last edited by Markus00000 (2011-10-14 17:35:28)
I tried -ck and BFQ, yes. But it did not help at all.
I use it and it works very well.
Best regards!
Just in case you missed it: BFQ is not enabled by default.
I always run make xconfig and check, so I had indeed tried BFQ.
I solved the problem with dirty_ratio, though.
Last edited by akurei (2011-10-14 19:39:26)