Re: Overcommitable memory??

From: James Sutherland
Date: Thu Mar 16 2000 - 06:01:02 EST

On 15 Mar 2000, Rask Ingemann Lambertsen wrote:

> On 14-Mar-00 00:26:19, James Sutherland wrote the following about "Re: Overcommitable memory??":
> > On 13 Mar 2000, Rask Ingemann Lambertsen wrote:
> >> On 13-Mar-00 12:54:31, David Whysong wrote the following about "Re: Overcommitable
> >> memory??":
> >>
> >> > Ok, so my big gravitational simulation gets NULL from malloc and decides
> >> > to save its work and exit. Uh-oh, time to demand-load a page of
> >> > executable code that had been discarded, so we can save the data. Hmm,
> >> > but we're out of memory...
> >>
> >> Without overcommit that just can not happen. There will be either a free
> >> page of memory or a free page of swap into which you can swap something
> >> else out.
> > Without overcommit, it just runs out of memory much earlier - even though
> > there is free memory available, that memory is already committed to
> > something else, so it isn't available to the simulation.
> What about the program that doesn't allocate memory after it
> initialises? There is no good reason that an OOM situation should affect it
> at all after initialisation. But it will affect such programs as long as
> you can't completely disable overcommitment of memory.

How? If it has just been loaded from disk, and it has got all the memory
it will need (not COW-shared with other processes), then it will never
experience a failed memory allocation. If it is taking too much VM, it may
still be killed to free up resources, at least until we can use memory
usage deltas to select a better victim. There is, however, NOTHING
"overcommitted" about its memory.
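
To make the "allocates only at initialisation" case concrete, here is a
minimal sketch (my own illustration, with an arbitrary 64MB figure) of a
program that grabs and touches everything it needs up front, so that no
later allocation can fail:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t need = 64 * 1024 * 1024;  /* everything we will ever use */
        char *buf;

        buf = malloc(need);
        if (buf == NULL) {
            fprintf(stderr, "out of memory at startup\n");
            return 1;
        }
        /* Touch every page now, so demand-zero pages get committed
         * here rather than at some later, less convenient moment. */
        memset(buf, 0, need);
        /* Optionally pin the pages (needs root); failure is harmless
         * for this demonstration. */
        if (mlockall(MCL_CURRENT) != 0)
            perror("mlockall");
        /* ... main loop: no further allocations ... */
        return 0;
    }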

> >> > Even if that succeeds, or there is a foolish "no overcommit" policy, we
> >> > need disk buffers. What if the program was told to save output to a SCSI
> >> > device, and the kernel needs to load the driver module? We're out of
> >> > memory!
> >>
> >> This is purely an administrative/user problem, not a kernel issue. Try
> >> as you might, you will never make a foolproof system. It is also somewhat
> >> of a hypothetical example, isn't it? Normally the module would be locked
> >> into memory because the file system would keep it open.
> > Yes - but it will almost certainly allocate some memory dynamically. It
> > could, for example, be saving to an automounted NFS home directory. Are
> > you going to keep enough RAM free to handle that at all times? How?
> If the disk dies under you, tough luck. If you use NFS, you accept that
> risk. Btw, according to linux/Documentation/filesystems/proc.txt, by
> default some memory is reserved for allocation by the kernel, so yes, we do
> have such memory at all times. Search for "freepages".

Yes, I know there is SOME memory - but is it enough for ANY operation to
succeed?
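
For reference, a throwaway sketch of mine (assuming a 2.2-era kernel,
where /proc/sys/vm/freepages holds the three free-page thresholds) that
reads that reserve:

    #include <stdio.h>

    int main(void)
    {
        /* On 2.2-era kernels, /proc/sys/vm/freepages contains three
         * numbers: the min, low and high free-page thresholds. */
        FILE *f = fopen("/proc/sys/vm/freepages", "r");
        int min, low, high;

        if (f == NULL) {
            perror("fopen");
            return 1;
        }
        if (fscanf(f, "%d %d %d", &min, &low, &high) == 3)
            printf("freepages: min=%d low=%d high=%d\n", min, low, high);
        fclose(f);
        return 0;
    }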

> >> Please realize that the only gain from overcommitting memory is that you
> >> may get away with having less swap space. The downside you get is random
> >> program crashes, lost work, etc.
> > Well, your approach would completely destroy any possibility of using
> > Linux for WWW servers, for example.
> It would do no such thing. What makes you think so?

Apache depends entirely on COW (overcommit) to fit in memory. Without it,
my 100 simultaneous HTTP sessions of a 20MB daemon would take 2GB of VM.
This is not practical. WITH COW overcommit, however, it's nearer 30MB.
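
To see why, a quick sketch (illustrative only - the sizes are just the
figures from the example above) of the fork()/COW pattern a preforking
daemon like Apache relies on:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    #define NCHILD      100
    #define DAEMON_SIZE (20 * 1024 * 1024)  /* the 20MB parent image */

    int main(void)
    {
        char *state;
        int i;

        /* The parent builds its 20MB of state exactly once. */
        state = malloc(DAEMON_SIZE);
        if (state == NULL) {
            perror("malloc");
            return 1;
        }
        memset(state, 0, DAEMON_SIZE);

        /* fork() marks these pages copy-on-write: all 100 children
         * share the parent's physical pages until one of them writes.
         * Charging every child the full 20MB up front ("no overcommit")
         * would demand 100 * 20MB = 2GB of VM for roughly 30MB of
         * memory actually in use. */
        for (i = 0; i < NCHILD; i++) {
            if (fork() == 0) {
                sleep(1);   /* child: read-only use of state */
                _exit(0);
            }
        }
        while (wait(NULL) > 0)
            ;
        return 0;
    }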

Besides, disabling COW overcommit does NOT avoid OOM situations - it
CAUSES them!

Yes, enabling overcommit via /proc/sys/vm/overcommit_memory can lead to
strange failures in OOM situations - so what? You would still have the
problem; it just has different symptoms.

> > It would also increase memory wastage
> > enormously when every single shell spawned has to be copied individually,
> > instead of being COW mapped.
> [cut]
> Pay attention, please. Nobody suggested that COW should be disabled. COW
> is a useful performance optimisation that doesn't hurt anything.

> > All your idea really does is increase memory consumption enormously, and
> > make memory problems much more frequent.
> Not quite "all". The important part is to make Linux work. Currently,
> even with /proc/sys/vm/overcommit_memory set to zero, Linux kills perfectly
> well-behaved programs in OOM situations in true Windows style.

Yes - because you are still running out of memory. However you used the
memory in the first place, you have now run out of it, and something has
to die to remedy that.

> > If you REALLY want this on your machine, try hunting for an early Linux
> > kernel. 1.0.x has most of the "features" you are after, I think?
> Stop trolling. If you think it is OK for programs to be killed left,
> right and center when memory runs low, go ahead and put
> echo 1 > /proc/sys/vm/overcommit_memory
> into your startup scripts and be happy, OK?

The problem is not overcommitted memory, but simply *USED* memory. Once
you have allocated all your memory to processes, subsequent memory
allocation attempts will fail. This causes processes to die.
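
A quick way to see this for yourself - a throwaway sketch of mine, not
code from this thread - is to allocate and touch memory until the system
pushes back:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        const size_t chunk = 1024 * 1024;   /* 1MB per step */
        size_t total = 0;
        char *p;

        /* With strict accounting, malloc() eventually returns NULL.
         * With overcommit, malloc() may keep succeeding and the
         * process is killed later, while touching the pages - the
         * same shortage, just with different symptoms. */
        while ((p = malloc(chunk)) != NULL) {
            memset(p, 1, chunk);   /* force real pages to be committed */
            total += chunk;
        }
        printf("malloc() failed after %lu MB\n",
               (unsigned long)(total / (1024 * 1024)));
        return 0;
    }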

As it happens, I do have this set to 0. However, out-of-memory conditions
can and will still occur, and processes will die as a result. This has
nothing to do with overcommit, COW or anything else - the system has
simply run out of this particular resource.


