I have a self-written scientific program that eats a lot of memory; it runs on a machine with 128 GB of memory. However, /usr/bin/time reports:
6524.25user 4804.81system 3:09:13elapsed 99%CPU (0avgtext+0avgdata 417392320maxresident)k
47480inputs+12224outputs (747major+1809500431minor)pagefaults 0swaps
According to this, my program would have used more than three times the available memory, which is not possible.
Does anyone know why the report is strange? Or am I misunderstanding the entire concept of maximum resident set size?
Last edited by toffyrn (2012-04-15 20:31:37)
I was intrigued by this. Some Googling turned up this recent SO thread:
So I should simply divide by 4, giving a total of ~100 GB, which seems more reasonable!?
The machine is running Ubuntu Server 10.04. I will try to run a smaller simulation on my home computer tomorrow and see whether that bug is present in Arch as well...
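As a quick sanity check of the divide-by-4 idea (a sketch, assuming the known behaviour of older GNU time builds, which multiplied ru_maxrss — already in KiB on Linux — by the page size in KiB, i.e. ×4 with 4 KiB pages):

```python
# Reported maxresident value from /usr/bin/time, in (mis-scaled) KiB.
reported_kib = 417392320

# Older GNU time multiplied ru_maxrss by 4 (4 KiB pages / 1024),
# even though Linux already reports ru_maxrss in KiB, so undo that:
actual_kib = reported_kib / 4

# Convert KiB -> GiB for comparison against the 128 GB machine.
actual_gib = actual_kib / 1024 / 1024
print(f"corrected peak RSS: {actual_gib:.1f} GiB")  # ~99.5 GiB
```

That lands comfortably under the machine's 128 GB, consistent with the divide-by-4 explanation.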
I ran a smaller system on my Arch-powered home computer, where I knew it was using close to the available 24 GB. /usr/bin/time reported 21.6 GB, so the bug seems to be nonexistent or fixed for us.
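One way to cross-check what /usr/bin/time reports on a given machine is to read ru_maxrss directly from the kernel via getrusage. A minimal sketch (assuming Linux, where ru_maxrss is in KiB; BSD/macOS report bytes instead):

```python
import resource

# Allocate ~100 MiB of zero-filled memory so the pages get touched,
# then read the process's peak resident set size from getrusage.
data = bytearray(100 * 1024 * 1024)

# On Linux, ru_maxrss is reported in KiB.
peak_kib = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(f"peak RSS: {peak_kib / 1024:.0f} MiB")
```

If this direct reading agrees with /usr/bin/time's maxresident value, the time binary on that system does not have the ×4 scaling bug.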
Last edited by toffyrn (2012-04-16 08:01:26)