Hi all,
I have a slightly specific question. I'm using my computer for numerical computations, which involve solving large (and, yes, I mean large) linear programs. Now, as I play with the code, and try out different setups or solvers, it happens that the problem just gets 'too big' from time to time, i.e. suddenly the program I run starts eating up all my RAM, and after it's done, it proceeds to do the same with all my swap space.
Now, I would like to prevent this. Ideally, I'd like to be able to just open gnome's system monitor, and kill the program. The problem is - once you've run out of normal RAM and operate on swap, doing pretty much anything takes ages, so I essentially have the option of waiting quite a while until the system monitor starts, or restarting (which isn't awesome either).
I was wondering if any of you can give me some hints on how to solve this. I looked into the 'nice' documentation, but I'm not sure how exactly nice handles memory. What I would need specifically is some way to prevent the program from using up all the RAM (so, e.g., let it use all the free RAM besides a few MB so I can still keep the system responsive, and only then proceed to using swap).
Thanks for the help!
Best,
Martin
Last edited by martinsz (2009-09-15 00:39:45)
Offline
You should look at doing something with "ulimit" before starting the app.
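For example, something along these lines before launching the solver — the 6 GB cap and the solver name are made up, pick a value a bit below your physical RAM:

```shell
# Cap virtual memory for this shell and all its children
# (bash ulimit takes the value in KB; 6 GB here is illustrative):
ulimit -v 6291456
# ./my_solver model.lp   # hypothetical solver invocation

# Quick demo of the effect: under a tighter 1 GB cap, an
# oversized allocation fails cleanly instead of thrashing swap.
( ulimit -v 1048576
  python3 -c "x = bytearray(2 * 1024**3)" ) 2>/dev/null \
  || echo "allocation refused"
```

The limit is inherited by child processes, so anything started from that shell is covered.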
Offline
Have you considered a virtual machine? Virtualbox is one example that allows you to set RAM limits for a guest OS. I haven't tried Virtualbox with Arch, but on my wife's Opensuse machine, it runs both Ubuntu and XP as guests.
Offline
The other option (applicable only if you have more than 4GB of RAM) is to accept a small performance drop and install a PAE kernel. A PAE kernel is a 32-bit kernel: normally a 32-bit OS will only address ~3.7GB of RAM, but with PAE enabled, Linux supports up to 64GB on a 32-bit system. The major upside here is that each process is limited to a 4GB address space (in practice about 3GB of user space), so no single application can eat all your RAM.
Offline
Also you could experiment with the swappiness option:
http://www.linuxvox.com/linux-articles/ … swappiness
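For reference, swappiness is a sysctl knob; a quick sketch (the value 10 is just an example):

```shell
# Show the current setting (the default is usually 60):
cat /proc/sys/vm/swappiness

# To lower it at runtime (needs root), so the kernel prefers
# reclaiming page cache over swapping out application memory:
#   sysctl vm.swappiness=10
# To persist it across reboots, add this line to /etc/sysctl.conf:
#   vm.swappiness=10
```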
Offline
Have you looked into PAM? PAM has quite a few options for limiting various things. Not quite sure how to set it up for this, though.
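For what it's worth, PAM's pam_limits module reads /etc/security/limits.conf; a sketch of a per-user address-space cap (the username and the 6 GB value are made up):

```
# /etc/security/limits.conf
# <domain>   <type>   <item>   <value in KB>
martin       hard     as       6291456
```

The `as` item limits a process's address space, which is applied to every session the user starts.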
Knute
Offline
Hi!
Thanks for all your input!
I don't think I can use a virtual machine (I don't have the space to install another system, plus one of my solvers has a node-locked license..). The only thing I found in the PAM documentation was setting the maximum amount of RAM per session, so essentially it's the same as ulimit.
Setting my memory limit to whatever physical RAM I have available would work, in the sense that whenever the code exceeds the RAM, it will produce an error and stop.
I think I'll try setting my swappiness to 0. According to the documentation, this should keep currently active applications (read: system-monitor) from being paged out to swap, which should mean I can still access it quickly even while my program is thrashing.
Another, fairly simple solution I found (and maybe this is a bit of overkill) is giving the offending program a very high nice value (i.e. low priority), and system-monitor a very low one (i.e. high priority). It doesn't prevent system-monitor from being paged out to swap, I think, but at least it doesn't have to jostle for position in main RAM once I want to use it, so it loads much faster.
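Concretely, something like this, assuming the monitor's process name is gnome-system-monitor (the priorities are just examples):

```shell
# Start the solver at the lowest scheduling priority (nice 19):
# nice -n 19 ./my_solver model.lp &   # hypothetical invocation

# Children of a niced command inherit the value; `nice` with no
# arguments prints the current niceness, so this prints 19:
nice -n 19 nice

# Raise the monitor's priority; negative nice values need root:
#   renice -10 -p "$(pgrep -f gnome-system-monitor)"
```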
Thanks for the help!
Martin
Offline
Have you considered a virtual machine? Virtualbox is one example that allows you to set RAM limits for a guest OS. I haven't tried Virtualbox with Arch, but on my wife's Opensuse machine, it runs both Ubuntu and XP as guests.
Overkill for a task like this, a chroot would be better ;)
Offline
I like the chroot idea.
Offline