Re: Endless overcommit memory thread.

From: Grendel (grendel@vip.net.pl)
Date: Sun Mar 26 2000 - 18:44:17 EST


** On Mar 26, Linda Walsh scribbled:

> > Therefore all known Unix Operating Systems allow the use of
> > 'currently unused' virtual RAM. This RAM may have been allocated, but
> > it has not been touched yet. The tasks that have allocated it will
> > not even know that it was 'borrowed'. This is the nature of
> > "over commit".
> ---
> Not all unix systems, however, allow it by default. Some have
> realized that allocating memory with no "assurance" lowers the overall
> integrity/security of the system. Some require that an administrator
> explicitly allow the unsafe behavior after they have learned about the
> implications.
>
> By allowing 'overcommit', you end up in a situation where now your
> 90M process touches the last of its memory -- and it isn't there, so....
> it SEGFAULTs because the memory mapper can't map the address to a physical
> object?
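
Just to illustrate what the over-commit described above amounts to in
practice: a toy program along these lines (the sizes are made up) asks for far
more address space than it ever touches, and on a default Linux setup the
malloc() still succeeds, because pages are only backed on first write.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
        size_t reserve = 1UL << 30;       /* ask for 1 GB of address space */
        char *p = malloc(reserve);

        if (!p) {
                fprintf(stderr, "allocation refused up front\n");
                return 1;
        }
        memset(p, 0, 4096);               /* touch only the first page     */
        printf("got %zu bytes, touched 4096 of them\n", reserve);
        free(p);
        return 0;
}
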
And how about putting a guard page at the boundary between the actually used
memory and the rest of the reserved pool? The initial placement of the guard
page would be completely arbitrary and tunable (say, 1/3 of the requested pool
size unless that exceeds the available virtual memory, in which case we grant
1/3 of the available memory). On the page fault the kernel either grants more
memory to the process, or a) kills it as a memory hog (bad), or b) signals it
that there is no more memory even though it was previously reserved. That
would guarantee that the process doesn't die unexpectedly but quits gracefully
instead. The signal could be handled in libc, making the scheme transparent
to (most) applications.
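
Something like this userspace sketch shows the mechanics I have in mind -- all
the names and numbers (POOL_SIZE, HARD_LIMIT, the 1/3 initial grant) are made
up for this mail, and the real thing would of course live in the kernel rather
than in a SIGSEGV handler:

#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define POOL_SIZE  (64UL * 1024 * 1024)   /* reserved address space      */
#define HARD_LIMIT (32UL * 1024 * 1024)   /* what we will actually back  */

static char  *pool;
static size_t usable;                     /* bytes currently accessible  */
static long   page_size;

static void guard_handler(int sig, siginfo_t *si, void *ctx)
{
        char *fault = (char *)si->si_addr;

        (void)sig; (void)ctx;
        if (fault >= pool && fault < pool + POOL_SIZE && usable < HARD_LIMIT) {
                /* Grant more memory: move the guard boundary forward. */
                size_t grow = usable / 2;
                grow = (grow + page_size - 1) & ~((size_t)page_size - 1);
                if (grow == 0)
                        grow = page_size;
                if (usable + grow > HARD_LIMIT)
                        grow = HARD_LIMIT - usable;
                mprotect(pool + usable, grow, PROT_READ | PROT_WRITE);
                usable += grow;
                return;                   /* retry the faulting access   */
        }
        /* No more backing: in the proposal this is the catchable signal. */
        write(STDERR_FILENO, "pool exhausted\n", 15);
        _exit(1);
}

int main(void)
{
        struct sigaction sa;

        page_size = sysconf(_SC_PAGESIZE);
        pool = mmap(NULL, POOL_SIZE, PROT_NONE,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (pool == MAP_FAILED)
                return 1;

        /* Arbitrary initial grant: 1/3 of the pool, rounded to a page. */
        usable = (POOL_SIZE / 3) & ~((size_t)page_size - 1);
        mprotect(pool, usable, PROT_READ | PROT_WRITE);

        memset(&sa, 0, sizeof(sa));
        sa.sa_sigaction = guard_handler;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, NULL);

        memset(pool, 0xAA, HARD_LIMIT);   /* walk past the guard boundary */
        printf("usable region grew to %zu bytes\n", usable);
        return 0;
}

The only point is that the boundary between "usable" and "merely reserved"
memory can be moved on demand, and that running into the hard limit becomes a
catchable event instead of a surprise kill.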

> If you allow overcommit, you can have *either* the SEGFAULT behavior
> *or* the no-mem/locked out (process touches everything up front). If you
> disallow overcommit, you reduce the possible behaviors to 1. The problem is
> that suppose the process allocates its memory -- with the 'overcommit'
> you have a system where it is hard to predict *what* will fail. Will it be
> a system process? A daemon? It's a low-integrity system because you can't
> figure out the behavior of failure in advance.
And what if processes were prioritized with respect to memory? Memory blocks
would be either "reserved", "allocated" or "free". Processes with higher
priorities could take more memory from the "reserved" pool even though the
pages in question had been reserved by another application. The kernel would
have the highest priority, thus preempting other tasks memory-wise. Is that a
sensible idea?
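
Roughly, to make the selection rule concrete (the struct and the mem_prio
field are invented for this mail; nothing like them exists in the kernel
today):

#include <stddef.h>

enum mem_state { MEM_FREE, MEM_RESERVED, MEM_ALLOCATED };

struct mem_task {
        const char   *name;
        int           mem_prio;   /* higher value = higher memory priority */
        unsigned long reserved;   /* pages still in the MEM_RESERVED state */
};

/*
 * Pick the task whose untouched reservation gets raided to satisfy a
 * request made at priority 'needed_prio'.  NULL means nobody of lower
 * priority holds reserved pages, so the request has to wait or fail.
 */
static struct mem_task *pick_victim(struct mem_task *tasks, int n,
                                    int needed_prio)
{
        struct mem_task *victim = NULL;
        int i;

        for (i = 0; i < n; i++) {
                if (tasks[i].reserved == 0)
                        continue;
                if (tasks[i].mem_prio >= needed_prio)
                        continue;  /* may only take from lower priority */
                if (!victim || tasks[i].mem_prio < victim->mem_prio)
                        victim = &tasks[i];
        }
        return victim;
}

The kernel itself would simply carry the highest mem_prio, so it never shows
up as a victim.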

marek



-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/


