Re: Out of memory kernel death

Tim Hollebeek (tim@franck.Princeton.EDU)
Thu, 8 May 1997 13:09:22 -0400 (EDT)

bert hubert writes ...
> On Wed, 7 May 1997, Theodore Y. Ts'o wrote:
> > looks through the processes and kills off the largest one.
> > This should be doable in Linux too...
> >
> > The problem is that the largest process is often the X server. Often
> > this will indeed free a lot of memory, but it's not necessarily the best
> > thing to do...
> This is getting uglier & uglier, but let's be pragmatic and make it kill
> the largest process NOT called "X".

Not good enough. "emacs" will go next, and I want that to survive too.
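
To see why a name blacklist doesn't scale, here is a hypothetical sketch (not anything in the kernel) of the proposed "kill the largest process not named X" policy. The process names and RSS figures are invented for illustration:

```python
# Hypothetical sketch of the proposed policy: kill the largest-RSS
# process whose name is not on an exemption list.  Names and sizes
# below are made up for illustration.

def pick_victim(procs, exempt=("X",)):
    """Return the name of the largest-RSS process not in `exempt`."""
    candidates = [p for p in procs if p["name"] not in exempt]
    return max(candidates, key=lambda p: p["rss"])["name"]

procs = [
    {"name": "X",     "rss": 20000},  # the X server: usually largest
    {"name": "emacs", "rss": 12000},  # next largest -- dies next
    {"name": "hog",   "rss": 8000},   # the actual runaway allocator
]

print(pick_victim(procs))  # prints "emacs", not "hog"
```

The exemption list has to grow per user and per workload, and size alone is a poor proxy for which process is actually misbehaving.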

Part of the problem is that Linux weights page-out decisions by RSS,
which under normal conditions is a good heuristic.
However, if one process is producing new pages much faster than the
rest of the processes, this means that the *other* processes are
preferentially paged out, since the same pool of pages gets checked
more often. This usually brings the system to its knees long before
the system actually runs out of memory. In theory, Linux kills the
process that fails to allocate a page it needs; in my experience the
machine grinds to a halt long before this actually happens, which is
quite annoying.
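
The effect can be seen in a toy model. The sketch below stands in a global least-recently-touched eviction policy for the kernel's RSS-weighted scan, which is enough to capture the asymmetry: the hog touches each page as it allocates it, so its pages are always fresh, while the idle processes' pages are the oldest in the pool and get evicted first. Process names and all numbers are invented:

```python
# Toy model (invented numbers, not kernel code) of the effect above:
# a fast allocator ("hog") pushes everyone else's pages out first.
import collections

def simulate(mem_limit=100, hog_allocs=150):
    clock = 0
    pages = []  # each page is [owner, last-touched time]
    for owner in ("emacs", "shell"):        # two idle processes
        for _ in range(20):
            pages.append([owner, clock])
            clock += 1
    for _ in range(hog_allocs):             # the runaway allocator
        clock += 1
        if len(pages) >= mem_limit:
            # evict the globally least-recently-touched page
            pages.remove(min(pages, key=lambda p: p[1]))
        pages.append(["hog", clock])
    return collections.Counter(p[0] for p in pages)

print(simulate())
```

In this run every page belonging to emacs and the shell is evicted before the hog loses a single old page of its own, so the system is thrashing on behalf of the innocent processes well before any allocation actually fails.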

Tim Hollebeek | Disclaimer :=> Everything above is a true statement,
Electron Psychologist | for sufficiently false values of true.
Princeton University | email:
----------------------| (NEW! IMPROVED!)