the thread is from 2007... that's quite the necro ejmarkow
If you read carefully, the last comment on this thread prior to mine was on 2009-04-11 07:27:01. Not quite a 'necro'.
but just to clarify before this gets closed - vm.vfs_cache_pressure is an integer from 0 to 200 - closer to 0 favors inode/dentry cache (which is usually quite small) and 100 is default behavior
Incorrect. vm.vfs_cache_pressure is not limited to 200; it accepts values from 0 upward with no fixed upper bound. Here is an example from:
http://www.kernel.org/pub/linux/kernel/ … ning.patch
----------
Some people want the dentry and inode caches shrink harder, others want them
shrunk more reluctantly.
The patch adds /proc/sys/vm/vfs_cache_pressure, which tunes the vfs cache
versus pagecache scanning pressure.
- at vfs_cache_pressure=0 we don't shrink dcache and icache at all.
- at vfs_cache_pressure=100 there is no change in behaviour.
- at vfs_cache_pressure > 100 we reclaim dentries and inodes harder.
The number of megabytes of slab left after a slocate.cron on my 256MB test
box:
vfs_cache_pressure=100000     33480
vfs_cache_pressure=10000      61996
vfs_cache_pressure=1000      104056
vfs_cache_pressure=200       166340
vfs_cache_pressure=100       190200
vfs_cache_pressure=50        206168
Of course, this just left more directory and inode pagecache behind instead of
vfs cache. Interestingly, on this machine the entire slocate run fits into
pagecache, but not into VFS caches.
----------
Here is a quote from the documentation file /usr/src/linux-2.6.33.2/Documentation/sysctl/vm.txt :
----------
vfs_cache_pressure:
Controls the tendency of the kernel to reclaim the memory which is used for
caching of directory and inode objects.
At the default value of vfs_cache_pressure=100 the kernel will attempt to
reclaim dentries and inodes at a "fair" rate with respect to pagecache and
swapcache reclaim. Decreasing vfs_cache_pressure causes the kernel to prefer
to retain dentry and inode caches. When vfs_cache_pressure=0, the kernel will
never reclaim dentries and inodes due to memory pressure and this can easily
lead to out-of-memory conditions. Increasing vfs_cache_pressure beyond 100
causes the kernel to prefer to reclaim dentries and inodes.
----------
It states nothing about a 200 upper limit, only 'beyond 100'.
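A quick way to confirm on a live system that values above 200 are accepted (a sketch; reading needs no privileges, the write needs root):

```shell
# Read the current value (default is 100)
cat /proc/sys/vm/vfs_cache_pressure

# Values well above 200 are accepted -- there is no 200 cap.
# As root:  sysctl -w vm.vfs_cache_pressure=1000
```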
My file management hasn't slowed to a crawl at all. Everything is working quite fast. I'm happy with the parameters I am now using.
Here is my system:
Linux Galicja 2.6.33.2-ARCHMOD #1 PREEMPT Fri Apr 2 13:44:32 CEST 2010 x86_64 Genuine Intel(R) CPU 575 @ 2.00GHz GenuineIntel GNU/Linux
I've just added the following to /etc/sysctl.conf:
# Minimise the kernel's tendency to swap out memory pages
vm.swappiness = 0
# Kernel will aggressively reclaim the memory which is used for caching of directory and inode objects, hence dentries and inodes
vm.vfs_cache_pressure = 1000
Result: Impact is positive. System is fast and responsive, nothing gets written out to swap, and everything is done with current physical RAM.
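For anyone following along: entries in /etc/sysctl.conf normally take effect at boot. A sketch of applying and checking them right away (needs root for the apply step):

```shell
# Re-read /etc/sysctl.conf and apply it immediately (as root)
sysctl -p

# Check that the new values are live
sysctl vm.swappiness vm.vfs_cache_pressure

# Watch swap usage; with these settings the "Swap" used
# column should stay at or near 0
free -m
```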
the thread is from 2007... that's quite the necro ejmarkow
but just to clarify before this gets closed - vm.vfs_cache_pressure is an integer from 0 to 200 - closer to 0 favors inode/dentry cache (which is usually quite small) and 100 is default behavior
inodes/dentries caching only takes up 200-300MB for me if EVERYTHING is cached (use "find / > /dev/null" to get it all cached)
for a desktop you want those caches more than IO/page caches - otherwise your file management, etc. slows to a crawl because the inode caches get pushed out of ram by IO cache (moving 4GB to a USB drive, etc)
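To see how much RAM the dentry/inode caches actually occupy, a sketch (the thread uses "find /"; /usr is used here as a smaller demo, and /proc/slabinfo is root-readable on newer kernels):

```shell
# Fill the dentry/inode caches by walking part of the filesystem
find /usr -xdev > /dev/null 2>&1

# The dentry and *_inode_cache rows show the VFS cache footprint
if [ -r /proc/slabinfo ]; then
    grep -E 'dentry|inode_cache' /proc/slabinfo
else
    echo "need root to read /proc/slabinfo (or run: slabtop -o)"
fi
```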
if you don't want to use swap you could just get rid of your swap file/partition (unless you use suspend to disk)
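Getting rid of swap, as suggested, is a sketch like this (assumes you do not use suspend-to-disk; the /dev/sda2 device name is just an example):

```shell
# Stop using all swap devices right away (as root)
swapoff -a

# To make it permanent, comment out the swap entry in /etc/fstab, e.g.:
#   #/dev/sda2  swap  swap  defaults  0  0
# (keep the partition itself if you might want suspend-to-disk back)
```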
#System booster
vm.swappiness=1
vm.vfs_cache_pressure=50
Let's see if it works.
I hope it does.
Actually, setting swappiness to 10 improved speed and responsiveness a lot!
With 1 GB of RAM, the default setting of vm.swappiness=60, and frequent suspends to disk, there were still 100-200 MB in the swap partition (and only 500 MB in RAM). Open but less-used apps like evince were still quite sluggish. Now, with vm.swappiness=10, everything is kept in RAM and all applications are directly available.
vlad
vm.swappiness=value
This controls how eagerly the kernel swaps: a higher value means the kernel will swap out memory pages more often, a lower value means it prefers to drop page cache instead. The default is 60 (this also affects how much ends up in swap, which matters for suspend to disk).
So unless the system swaps constantly, changing the above value will not do much.
VFS shrinkage tuning: this adds /proc/sys/vm/vfs_cache_pressure, which tunes the vfs cache versus pagecache scanning pressure:
- at vfs_cache_pressure=0 we don't shrink dcache and icache at all.
- at vfs_cache_pressure=100 there is no change in behaviour.
- at vfs_cache_pressure > 100 we reclaim dentries and inodes harder.
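Before and after tuning either knob, both the current settings and actual swap usage can be read without root:

```shell
# Current swappiness (default 60)
cat /proc/sys/vm/swappiness

# How much swap is actually in use right now
grep -E 'SwapTotal|SwapFree' /proc/meminfo
```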
So unless something else is going on, these values (as set) have very little impact on average system performance.
Will keep using them for a while and see, however. Probably learned a few things anyway, so thanks for the link.
If I may ask... what do these options do exactly?
Check the URL I have pasted; it will give you an idea.
and placed the following in my /etc/sysctl.conf:
#System booster
vm.swappiness=1
vm.vfs_cache_pressure=50
Is this correct? Please try it and let me know if you experience any improvement in responsiveness.