Re: Overcomittable memory (Was: Linux 2.2.15pre12)

From: James Sutherland (jas88@cam.ac.uk)
Date: Mon Mar 13 2000 - 18:13:50 EST


On Mon, 13 Mar 2000, Jamie Lokier wrote:

> James Sutherland wrote:
> > If you REALLY want to allow for the worst case: Each process has an
> > address space of 3 Gb (+1Gb kernel). Your method would require, for a
> > system with a hard limit of 1024 processes, a minimum of 3Tb swap+RAM. You
> > can try this if you want, but I don't think many people will follow this.
>
> :-)
>
> Obviously the problem is using Apache and mod_perl -- the most useless
> possible examples.
>
> More appropriate examples are processes that don't allocate any memory
> once they have started -- they just run. Control systems and the like.
> I can't quite think which ones off hand. Maybe an Occam interpreter :-)
>
> Anyway, if you have those running, preventing overcommit, for those
> processes at least, means they will not be terminated by out of memory
> conditions. You don't need 3Gb in that case.
>
> Admittedly, on a typical Linux box programs that don't allocate memory
> (or call fork or use kernel services that allocate memory) are few and
> far between.

Not really the point. You were trying to use the worst-case memory usage -
which, in reality, is the entire address space. The present approach -
tracking the current, real memory consumption - is the more appropriate
one. On anything other than a massively over-specced machine (>3Gb
RAM+swap), even a SINGLE process (say, a leaking init) can trigger an OOM
condition.
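
To see the difference, something like this (untested, just to illustrate;
nothing beyond standard mmap() flags) reserves a huge region but only ever
touches a little of it - the "worst case" charge would be 1Gb, the real
cost a few Mb:

  /* Reserve 1Gb of address space, but only ever dirty the first 4Mb.
   * With overcommit, the mmap() succeeds and real pages are allocated
   * only as they are first written. Charging the worst case up front
   * would mean accounting for the whole gigabyte here. */
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>

  #define GIG (1024UL * 1024 * 1024)

  int main(void)
  {
          char *p = mmap(NULL, GIG, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

          if (p == MAP_FAILED) {
                  perror("mmap");
                  return 1;
          }
          memset(p, 0, 4UL * 1024 * 1024);  /* only 4Mb becomes resident */
          getchar();  /* compare VSZ and RSS in ps/top at this point */
          return 0;
  }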

In other words, we will ALWAYS be able to run out of memory. Whatever
level you set this limit at, sooner or later (much sooner with your
approach) we will no longer be able to allocate memory. At this point, the
system is in a dangerous state - processes will start dying. (Worse still,
they may simply linger, unable to operate.)

The best approach is: maximise the memory we can provide, using COW etc.
If we do eventually run out of memory (in a worst-case scenario normally,
or about halfway through the init scripts with your approach), we must
deal with it in some way.
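
COW is also what makes fork() affordable in the first place. A quick
(again untested) illustration of how little a child really costs until it
starts writing:

  /* After fork(), every page of buf is shared copy-on-write with the
   * parent. Writing one byte in the child copies one page, not the
   * whole 256Mb - which is why reserving a full second copy "just in
   * case" is such a huge overestimate. */
  #include <stdlib.h>
  #include <string.h>
  #include <sys/types.h>
  #include <sys/wait.h>
  #include <unistd.h>

  #define BIG (256UL * 1024 * 1024)

  int main(void)
  {
          char *buf = malloc(BIG);

          if (!buf)
                  return 1;
          memset(buf, 1, BIG);        /* parent really uses the pages */

          if (fork() == 0) {
                  buf[0] = 2;         /* child copies one page only */
                  sleep(10);          /* compare parent and child RSS */
                  _exit(0);
          }
          wait(NULL);
          return 0;
  }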

OK, we're out of memory. What's the best approach? We need to free some;
we've already freed all the buffers we can, exhausted available swap
space, etc. The only remaining options are to (a) kill random processes by
denying all malloc() requests, or (b) kill specific processes explicitly.

IMO, (b) is clearly the better of the two.
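
In user space, (b) boils down to something like this (untested sketch; the
real decision belongs in the kernel and can use much smarter heuristics
than raw RSS - this just picks the biggest resident process and kills it):

  /* Walk /proc, find the process with the largest resident set, and
   * kill that one victim outright - instead of letting random malloc()
   * failures take out whichever processes happen to allocate next. */
  #include <dirent.h>
  #include <signal.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/types.h>

  int main(void)
  {
          DIR *proc = opendir("/proc");
          struct dirent *de;
          long worst_rss = 0;
          pid_t victim = 0;

          while (proc && (de = readdir(proc)) != NULL) {
                  pid_t pid = atoi(de->d_name);
                  char path[64];
                  long size, rss;
                  FILE *f;

                  if (pid <= 1)   /* skip non-numeric entries and init */
                          continue;
                  sprintf(path, "/proc/%d/statm", (int) pid);
                  f = fopen(path, "r");
                  if (!f)
                          continue;
                  if (fscanf(f, "%ld %ld", &size, &rss) == 2 &&
                      rss > worst_rss) {
                          worst_rss = rss;
                          victim = pid;
                  }
                  fclose(f);
          }
          if (proc)
                  closedir(proc);
          if (victim)
                  kill(victim, SIGKILL);
          return 0;
  }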

You may wonder why I assume processes should die when unable to allocate
memory. Here's why:

syslog tries to allocate a small buffer to handle incoming data, and it
fails. Options: (1) Silently discard all incoming data, invisibly becoming
a bitbucket. (2) Exit, making it clear there is now no syslog facility,
and hoping that we will be respawned later, once the critical problem
(which this is) has been resolved.

Again, (2) seems the more appropriate.
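
In code the difference is trivial - (1) would just return and drop the
data on the floor, (2) looks something like this (untested, and the helper
is made up for the example):

  #include <stdio.h>
  #include <stdlib.h>

  /* Get a buffer for one incoming message; if we can't, complain
   * loudly and exit, so that whoever supervises us (init, in
   * syslogd's case) knows the facility is gone and can respawn it
   * once the memory crunch is over. */
  static char *get_line_buffer(size_t len)
  {
          char *buf = malloc(len);

          if (!buf) {
                  fprintf(stderr, "out of memory, exiting\n");
                  exit(1);
          }
          return buf;
  }

  int main(void)
  {
          char *buf = get_line_buffer(1024);

          /* ... read and log incoming data using buf ... */
          free(buf);
          return 0;
  }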

James.
