Re: Overcommitable memory??

From: Khimenko Victor (khim@sch57.msk.ru)
Date: Mon Mar 13 2000 - 19:12:45 EST


In <3050.107T650T8914010rask@kampsax.k-net.dk> Rask Ingemann Lambertsen (rask@kampsax.k-net.dk) wrote:
> On 13-Mar-00 12:54:31 David Whysong wrote the following about "Re: Overcommitable memory??":
>> On 13 Mar 2000, Rask Ingemann Lambertsen wrote:

>>>Apps would be told that the system is out of memory instead of just
>>>getting SIGKILLed out of the blue. Apps getting NULL from
>>>malloc() can react appropriately, such as saving your files to disk,
>>>trying again a little later or just exiting if that is acceptable for
>>>what the app was doing. Apps getting SIGKILL will take your unsaved
>>>work down with them.
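(For the record, the "react appropriately" path is just the usual check of
malloc()'s return value, something like the sketch below; save_work_to_disk()
is a made-up stand-in for whatever the application actually does to preserve
its state.)

  #include <stdio.h>
  #include <stdlib.h>

  /* Made-up example: whatever the app does to preserve its state. */
  static void save_work_to_disk(void)
  {
          fprintf(stderr, "checkpoint written\n");
  }

  /* Allocate or die gracefully: on NULL, save the work and exit
   * instead of waiting for a SIGKILL to take it all away. */
  void *xmalloc(size_t n)
  {
          void *p = malloc(n);

          if (p == NULL) {
                  fprintf(stderr, "out of memory, saving work\n");
                  save_work_to_disk();
                  exit(EXIT_FAILURE);
          }
          return p;
  }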

>> Ok, so my big gravitational simulation gets NULL from malloc and decides
>> to save its work and exit. Uh-oh, time to demand-load a page of
>> executable code that had been discarded, so we can save the data. Hmm, but
>> we're out of memory...

> Without overcommit that just can not happen. There will be either a free
> page of memory or a free page of swap into which you can swap something
> else out.

No. There WILL NOT be anything available in the worst-case scenario. Unless by
not overcommitting you mean "reserve 3GiB of virtual memory for each task", and
THAT solution is clearly inappropriate.
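The classic example is fork() from a big process: with strict accounting the
kernel has to reserve a full second copy of the parent's writable address
space for the moment between fork() and exec(), even though the child never
touches it. A rough illustration (the sizes are arbitrary):

  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  int main(void)
  {
          /* A process that really uses ~1 GiB of writable memory ... */
          size_t sz = (size_t)1 << 30;
          char *big = malloc(sz);

          if (big == NULL)
                  return 1;
          memset(big, 1, sz);

          /* ... and now wants to run a tiny helper.  Without overcommit,
           * this fork() needs another ~1 GiB of memory+swap reserved for
           * the copy-on-write child, which throws it all away in execl()
           * a moment later. */
          if (fork() == 0) {
                  execl("/bin/true", "true", (char *)NULL);
                  _exit(127);
          }
          return 0;
  }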

>> Even if that succeeds, or there is a foolish "no overcommit" policy, we
>> need disk buffers. What if the program was told to save output to a SCSI
>> device, and the kernel needs to load the driver module? We're out of
>> memory!

> This is purely an administrative/user problem, not a kernel issue. Try
> as you might, you will never make a foolproof system. It is also somewhat
> of a hypothetical example, isn't it? Normally the module would be locked
> into memory because the file system would keep it open.

Ok. The filesystem can need a buffer too :-) "You will never make a foolproof
system" is exactly right. And without overcommitting, many currently working
systems will die with an "Out of memory" message from syslogd or init ...

>> Even if we all build non-modular kernels, the kernel does some
>> dynamic memory allocation.

> I do not know enough about the kernel to answer that one.

It does. All the time. It tries to keep the memory it uses small, obviously,
but it's really hard to predict how much the kernel will need.
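Every driver and filesystem does it; any allocation can come back NULL at more
or less any time. A rough in-kernel sketch (struct foo is made up):

  #include <linux/slab.h>
  #include <linux/string.h>

  /* Made-up example structure. */
  struct foo {
          int count;
          char name[32];
  };

  /* The pattern used all over the kernel: every allocation can fail,
   * and the caller has to cope with getting NULL back. */
  struct foo *alloc_foo(void)
  {
          struct foo *f = kmalloc(sizeof(*f), GFP_KERNEL);

          if (f == NULL)
                  return NULL;            /* caller handles -ENOMEM */
          memset(f, 0, sizeof(*f));
          return f;
  }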

>> As for "trying again a little later", that leaves you with an unresponsive
>> and unusable system in many cases.

>> And please explain why my simulation -- that may have started many weeks
>> (or months) ago -- should "just exit" because some random 5-minute old
>> Mathematica process went and allocated half a gigabyte of memory?

> You missed the part where I wrote "react appropriately". What is
> appropriate depends on which function the application has. Notice that
> _with_ overcommitment of memory, your simulation that started weeks/months
> ago will be "just killed" because the Mathematica process needed half a
> gigabyte of memory. The difference being that with overcommitment of
> memory, you lose all that work. Without overcommitment of memory, you can
> just restart the simulation from the files that were saved.

> Please realize that the only gain from overcommitting memory is that you
> may get away with having less swap space. The downside you get is random
> program crashes, lost work, etc.

Exactly like without overcommitting. So:
  1) with overcommitting you get random program crashes, lost work, etc.
  2) without overcommitting you get random program crashes (just MARGINALLY
less random), lost work, etc., and you need MUCH more swap.

Which do you prefer?
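The difference is easy to see with a toy program: with overcommit the malloc()
below typically succeeds even for an absurd size and the process only gets
killed while touching the pages; with strict accounting the very same malloc()
returns NULL up front. (A sketch; pick a size bigger than your RAM + swap.)

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  int main(void)
  {
          size_t sz = (size_t)1 << 31;    /* 2 GiB -- adjust to taste */
          char *p = malloc(sz);

          if (p == NULL) {
                  printf("malloc() failed up front (no overcommit)\n");
                  return 1;
          }
          printf("malloc() succeeded, touching the pages ...\n");
          memset(p, 1, sz);               /* may draw a SIGKILL here */
          printf("survived\n");
          return 0;
  }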

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/


