Memory and Paging

Jason Crain jason at bluetree.ath.cx
Wed Feb 11 20:54:14 UTC 2009


On Wed, February 11, 2009 2:26 pm, John Hubbard wrote:
> My computer has some memory.  When I need more memory than the computer
> has, it writes some of the stuff in memory to the hard drive to free up
> memory.  This is troublesome because the hard drive is very slow.  While
> moving stuff around, the computer often slows way down since there is no
> free memory.  To fix things I often need to kill the runaway task.
> (Usually some code that I have written that is misbehaving, or behaving
> properly but using more memory than I expected.)  When in this state,
> it often takes a very long time to ssh into the machine to kill the task
> in question.  I am trying to figure out a solution to this problem.  It
> seems like I would need to do a few things.
>
> 1) Have a process running that 'owns' a certain amount of memory (enough
> to run bash/top/kill/pidof and a few other small programs) and keeps
> this memory from being paged out.
> 2) Enough memory set aside for SSHD to allow a new connection.
> 3) Some way to ssh in and access that memory owning process or request
> memory from that process.
>
> Is there any way to do these things?  Does someone else have a different
> approach that accomplishes the same thing?  How much memory am I talking
> about?  Would 5MB be enough?  Any other thoughts or comments?

You can use ulimit, a bash builtin command that limits the memory, file
size, and other resources of the shell and of any process started from
that shell.  It is documented in the bash man page.
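For example, here is one way it could be used for this situation (the
512 MB cap and the program name are arbitrary illustrations, not values
from this thread):

```shell
# Run in a subshell so the cap doesn't stick to your login shell.
(
    # Cap virtual memory for this shell and everything it starts.
    # The value is in kilobytes; 524288 KB = 512 MB (an arbitrary
    # example figure -- pick something below your physical RAM).
    ulimit -v 524288

    # Confirm the new soft limit; prints 524288.
    ulimit -v

    # Any program started here gets allocation failures once it
    # exceeds the cap, so a runaway process aborts instead of
    # pushing the whole machine into swap, e.g.:
    #   ./my-leaky-program
)
```

With a cap like this in place on the shells you run experimental code
from, a misbehaving program typically dies with an out-of-memory error
while the rest of the system stays responsive, which may remove the need
to reserve memory for sshd in the first place.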




More information about the ubuntu-users mailing list