Hi,
every time I copy a file which is bigger than or similar in size to my total RAM (4 GB), I notice very low responsiveness from Firefox (it becomes totally unresponsive; I can't switch tabs or scroll for 30-60 s). Of course my free memory is very low (something like 50-100 MB) and I notice some swap usage. AFAIK Linux caches everything that is being copied, but in the case of such big files it seems unnecessary.
Is there a way to reduce max buffer size?
I know that buffering is good in general, but I get the feeling that Firefox is giving up RAM and then has to read everything back from disk, which slows it down. I always have many tabs open, so it often uses around 30% of memory.
I have searched many times for how to reduce buffer sizes, but I've always found only articles with a "buffering is always good and never an issue" attitude.
I would be very happy to hear any suggestions,
cheers,
kajman
cgroups
You can launch the copy operation (or better yet, the shell or your file manager) in a cgroup and limit the RAM it is allowed to use.
More documentation can be found in <kernel source>/Documentation/cgroups/memory.txt:
http://git.kernel.org/?p=linux/kernel/g … cf;hb=HEAD
I'm sorry that I can't provide you with more information, since this is just an idea that I've had floating around in my head for some time now. Having 4 GB of RAM didn't make it necessary for me to implement it.
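In case anyone wants to try it, here is a rough sketch using the libcgroup userspace tools (the group name "copyjob" and the 200M limit are made-up examples, and the tools need to be installed):
$ sudo cgcreate -g memory:/copyjob                          # create a memory cgroup
$ sudo cgset -r memory.limit_in_bytes=200M copyjob          # cap RAM + page cache for it
$ sudo cgexec -g memory:copyjob cp /path/to/bigfile /dest/  # run the copy inside the group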
With 4 GB RAM you could probably disable swap altogether. If not, you might want to set swappiness to a low value (I like 0) and/or read that article: http://rudd-o.com/en/linux-and-free-sof … o-fix-that
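You can try both of those from a terminal before touching any config files, something like:
$ sudo sysctl -w vm.swappiness=0    # takes effect immediately, lasts until reboot
$ sudo swapoff -a                   # or turn swap off entirely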
Last edited by stqn (2011-10-11 16:05:52)
This seems to be a popular problem, going back years. The default Linux setup is bad for responsiveness, it seems.
Here's the summary of what I do:
Firstly, install a BFS-patched kernel, for a better kernel scheduler, and also so that the ionice and schedtool commands will work. Bonus points for switching to BFQ while you're at it - or stick with CFQ, which also supports ionice.
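To see which I/O scheduler a disk is currently using, and to switch it at runtime (sda is a placeholder for your device, and bfq only shows up if your kernel was built with it):
$ cat /sys/block/sda/queue/scheduler         # the active scheduler is shown in brackets
# echo bfq > /sys/block/sda/queue/scheduler  # switch, as root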
In /etc/fstab, use commit=60 rather than the default of 5 seconds, and also noatime, e.g.:
UUID=73d55f23-fb9d-4a36-bb25-blahblah / ext4 defaults,noatime,nobarrier,commit=60 1 1
In /etc/sysctl.conf
# From http://rudd-o.com/en/linux-and-free-software/tales-from-responsivenessland-why-linux-feels-slow-and-how-to-fix-that
vm.swappiness=0
# https://lwn.net/Articles/572921/
vm.dirty_background_bytes=16777216
vm.dirty_bytes=50331648
In ~/.bashrc - see post, e.g.:
alias verynice="ionice -c3 nice -n 15"
In /etc/security/limits.d/ - see post. Read CK's excellent blog article for more info.
In your cp command, add the word verynice to the start, to stop the large batch copy from having the same priority as your UI.
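For example (the file names are placeholders):
verynice cp /path/to/huge.iso /mnt/backup/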
Compile sqlite without fsync, to make e.g. firefox smoother.
Potentially use threadirqs to prioritize the interrupt-handling.
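threadirqs is a kernel boot parameter, so assuming GRUB it goes on the kernel command line in /etc/default/grub, e.g.:
GRUB_CMDLINE_LINUX_DEFAULT="quiet threadirqs"
Then regenerate the config with grub-mkconfig -o /boot/grub/grub.cfg and reboot.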
Edit: Updated vm.swappiness from 0 to 10, from CK's blog.
Edit2: Also see patch and e.g. nr_requests in thread.
Edit3: Using nice instead of schedtool - not sure whether schedtool can hog the CPU.
Edit4: Added threadirqs.
Edit5: Tweaked sysctl.conf settings.
Edit6: Added nobarrier option to mount, and sqlite's fsync.
Edit7: Removed swap comment - I do use a swapfile, these days, mainly because firefox needs so much virtual RAM to compile.
Last edited by brebs (2014-03-10 09:51:34)
Thank you all for the answers!
I'm sorry for replying so late, but I hadn't subscribed to this topic (I didn't know it doesn't happen automatically).
In the meantime I came up with installing a BFS-patched kernel myself and it almost got rid of the problem. I'll try to apply your ideas in the near future, as they look interesting.
@brebs:
While I was searching for a solution to this problem I found some opinions that disabling swap will not lead to any performance gains and can make the system completely unresponsive. Did you have any problems? I was thinking about reducing swappiness as much as possible, but leaving swap there just in case.
cheers,
kajman
Did you have any problems?
Nope.
@brebs:
While I was searching for a solution to this problem I found some opinions that disabling swap will not lead to any performance gains and can make the system completely unresponsive. Did you have any problems? I was thinking about reducing swappiness as much as possible, but leaving swap there just in case.
I'd keep an eye on your actual memory usage before disabling swap altogether - I also had 4 GiB of RAM and a 4 GiB swap file, and depending on what I was doing I could easily be using 50-60% of that swap file.
Of course Linux works great without swap. Better than with swap, if you ask me. My RAM usage almost always stays below 1 GB BTW. The only time I filled my 4 GB was trying to solve a Project Euler problem the wrong way.
Edit: what can make the system unresponsive, in my experience, is ENABLING swap. When an application goes out of control and starts using swap memory... I'd rather have that app killed before the swap is full, because filling it can take quite some time...
Last edited by stqn (2011-10-18 09:51:46)
What's your swappiness? Depending on the value, you could be touching swap even with free memory.
Look at the wiki's Swap article for that:
https://wiki.archlinux.org/index.php/Swap#Swappiness
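Checking the current value only takes a second:
$ cat /proc/sys/vm/swappiness    # the kernel default is 60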
Last edited by ethail (2011-10-18 10:15:58)
If anyone is interested:
$ su
# mkdir /tmp/memgroup
# mount -t cgroup none /tmp/memgroup -o memory     # mount the memory controller
# cd /tmp/memgroup
# mkdir 0                                          # create a child group named "0"
# cd 0
# echo 100M > memory.limit_in_bytes                # cap RAM + page cache for this group
# echo <pid from another shell> > tasks            # move that shell into the group
In another shell, as a regular user:
$ echo $$                                          # the PID to put into "tasks" above
$ rsync -av from_big to_big                        # the copy now runs under the 100M cap
Btw, I am not a big fan of setting swappiness to 0. Most of the time, having a file cached is better than having a program cached. If Firefox is accessed, for example, and the file caches have been flushed (due to swappiness 0), Firefox won't be responsive, because it has mmap'ed its sqlite databases, its web cache, some libraries, UI elements, etc.
You actually don't want programs like rsync, locate, and <various indexers> to pollute your cache with pages which won't ever be accessed again.
You can turn the cgroups trick around: instead of limiting processes like rsync, you can guarantee 3 GB of memory to Firefox and other programs.
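A rough sketch of that, reusing the hierarchy mounted above (the group name "browser" and the 3G figure are just illustrative; memory.soft_limit_in_bytes is not a hard guarantee, it only makes other groups get reclaimed first under memory pressure):
# cd /tmp/memgroup
# mkdir browser
# echo 3G > browser/memory.soft_limit_in_bytes     # reclaim from other groups first
# echo <firefox pid> > browser/tasks               # move Firefox into the group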
Reacting to an old post, as there is a simpler solution.
You can avoid the Linux file cache altogether by using direct I/O. There is no flag for it in the cp command, but there is in dd.
So do
dd if=sourcefile of=destfile iflag=direct oflag=direct bs=1M
and none of it will ever enter the file cache, so it won't push anything else out.