On the issue of low memory situations

From: Nicholas Vinen (hb@sonique.com)
Date: Fri Mar 17 2000 - 00:46:15 EST


        I can't pretend to be an expert, but I'd like to throw a comment or two into
the overcommitment and out-of-memory discussion.

        First, with overcommitment, a process can find itself writing to memory it
has already been granted, only to discover that there is no space left to back
the allocation. In that case, I can think of a few things that could happen.
The write could fault. The write could block until memory became available.
The allocation of the page could even block while memory is merely low, rather
than exhausted. I think being able to select this behaviour per process is
going to be important, perhaps in combination with a userland daemon.
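        To make that concrete, here is roughly what I have in mind, as a standalone
C sketch. The enum, the struct, the helper and the numbers are all made up to
illustrate the idea; this is not real kernel code.

/*
 * Rough standalone sketch, NOT kernel code: the enum, the struct and the
 * helper are invented for illustration only.
 */
#include <stdio.h>

enum lowmem_behaviour {
    LOWMEM_FAULT,     /* let the write fault (process segfaults/dies)       */
    LOWMEM_SLEEP_OOM, /* block the write until memory is available again    */
    LOWMEM_SLEEP_LOW, /* block already when memory is merely low, not out   */
};

struct task {                      /* stand-in, not the kernel's task struct */
    const char            *name;
    enum lowmem_behaviour  behaviour;  /* assigned by root, per process */
};

/* Made-up counters standing in for the real free-page accounting. */
static long free_pages    = 0;
static long low_watermark = 64;

/* What the fault path might do when it cannot back an overcommitted page. */
static const char *handle_no_memory(const struct task *t)
{
    switch (t->behaviour) {
    case LOWMEM_SLEEP_LOW:
        if (free_pages <= low_watermark)
            return "sleep (memory low)";
        /* otherwise fall through to the plain out-of-memory check */
    case LOWMEM_SLEEP_OOM:
        return free_pages == 0 ? "sleep (memory exhausted)" : "proceed";
    case LOWMEM_FAULT:
    default:
        return free_pages == 0 ? "fault" : "proceed";
    }
}

int main(void)
{
    struct task cc = { "cc1", LOWMEM_FAULT };
    struct task x  = { "X",   LOWMEM_SLEEP_OOM };

    printf("%s: %s\n", cc.name, handle_no_memory(&cc));
    printf("%s: %s\n", x.name,  handle_no_memory(&x));
    return 0;
}
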
        For example, if root could assign one of these three behaviours to each
process, then a system could work like this: the kernel has its usual extra
pages reserved for itself. Important processes like init, cron, kflushd
etc. could be allowed to exhaust all of memory and sleep when it's gone (would
any of these sleeping cause the system to die?). Perhaps some processes like X
would also sleep rather than segfault when memory runs out, though X might
also be smart enough to shut down cleanly if it notices that there's not
enough memory left for it to do much more than that. Most user programs would
just fault if they came too close to exhausting memory.
        Then the userland daemon, which allocates all of its memory at init (so that
it can keep running even under memory pressure), would sit there and take down
processes which misbehave: any time the minimum amount of free memory is
reached, a heuristic is used to pick a process to kill, based on a combination
of its memory usage, its owner, its memory "priority" (perhaps another
assignable value, like the low-memory behaviour above) and perhaps a specific
rule as well. This way, when I run gcc on some file so complicated that it
tries to take up all my RAM and swap space to compile, gcc will fault before
it manages to do so. My apache will probably be put to sleep if it gets too
close to exhausting memory, at which point the daemon will kill off anything
of lower priority. Maybe nothing else is running at a lower memory priority,
in which case apache will keep sleeping and eventually be killed itself if
nothing else can be and the memory is needed. Even when most of memory is
exhausted, cron and the other important processes should still have enough
room to continue.
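        To show what I mean by the heuristic, here is a standalone C sketch of how
the daemon might score its candidates. Every field, weight and name is
invented; it only illustrates combining usage, owner and "priority".

/*
 * Standalone sketch of the daemon's victim selection, NOT a real interface:
 * all fields, weights and names are invented for illustration.
 */
#include <stdio.h>

struct proc {
    int         pid;
    const char *name;
    long        rss_pages;    /* memory usage                              */
    int         uid;          /* owner: root's processes score lower       */
    int         mem_priority; /* assignable "memory priority", 0..10       */
    int         exempt;       /* specific rule: never kill (init, the daemon) */
};

/* Higher score == better candidate to kill. */
static long badness(const struct proc *p)
{
    long score = p->rss_pages;

    if (p->uid == 0)
        score /= 4;                 /* be reluctant to kill root's processes */
    score /= (p->mem_priority + 1); /* high priority shrinks the score a lot */
    return score;
}

static const struct proc *pick_victim(const struct proc *procs, int n)
{
    const struct proc *victim = NULL;
    int i;

    for (i = 0; i < n; i++) {
        if (procs[i].exempt)
            continue;
        if (!victim || badness(&procs[i]) > badness(victim))
            victim = &procs[i];
    }
    return victim;
}

int main(void)
{
    struct proc procs[] = {
        {    1, "init",     200,    0, 10, 1 },
        {  314, "apache",  8000,   99,  5, 0 },
        { 1592, "cc1",    60000, 1000,  0, 0 },
    };
    const struct proc *v = pick_victim(procs, 3);

    printf("would kill pid %d (%s)\n", v->pid, v->name);
    return 0;
}

With numbers like these, a runaway cc1 scores far higher than apache, so the
compiler is what gets taken down, not the web server or X.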

        This certainly seems like a much more acceptable alternative to me than
what currently happens. For example, I did hit the gcc problem I mentioned:
gcc used all the memory and then faulted, leaving X in an unusable state. If X
had had a higher "memory priority" than gcc (as it probably should), I
wouldn't have had this problem.

        What do you think? Running out of memory can't be avoided entirely, and
overcommitment is pretty much a requirement. But I don't see any reason why,
when an overcommitted allocation is honoured, it has to be honoured down to
the last page in the system for every process. It seems to me that this is the
main cause of the problem. And perhaps some processes would rather wait for
memory to become available than die.
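        As a toy illustration of that last point, the allocator could keep back a
small reserve which ordinary processes are never allowed to dig into, so only
a handful of important tasks can ever push the system down to its genuinely
last pages. Again, the names and numbers below are made up.

/*
 * Standalone sketch, made-up numbers: a reserve that only "important"
 * processes may dig into.
 */
#include <stdbool.h>
#include <stdio.h>

#define RESERVED_PAGES 256

static long free_pages = 300;

static bool may_commit_page(bool important)
{
    long limit = important ? 0 : RESERVED_PAGES;

    if (free_pages > limit) {
        free_pages--;   /* commit one more page */
        return true;
    }
    return false;       /* refuse: sleep or fault, per the behaviour above */
}

int main(void)
{
    /* An ordinary process stops getting pages once only the reserve is left... */
    while (may_commit_page(false))
        ;
    printf("pages left for important tasks: %ld\n", free_pages);

    /* ...but init, cron and friends can still make progress. */
    printf("important task gets a page: %s\n",
           may_commit_page(true) ? "yes" : "no");
    return 0;
}
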

                Nicholas Vinen
