Avoiding OOM on overcommit...?

From: Michael Bacarella (mbac@nyct.net)
Date: Fri Mar 17 2000 - 19:09:13 EST


This was mentioned already, but I haven't seen a really decent reason why
it's not a good idea. Therefore, I'm giving it more airtime.

Solution?

No overcommit of VM. Yes! This would be HORRIBLE for performance, but ONLY
if you actually allocated the memory to the process up front. If you don't
actually allocate it, and instead just reserve enough backing store that
the process COULD allocate it in the future, you take no performance hit
and gain that much stability.
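
To be concrete about "just reserve it": the check is nothing more than a
counter compared against RAM + swap at request time. Here's a minimal
userspace sketch of that bookkeeping (vm_reserve/vm_release and the 256M
limit are made up for illustration; this is not kernel code):

#include <stdio.h>
#include <stddef.h>

/* Hypothetical commit limit: physical RAM plus swap, in bytes.  A real
 * kernel would compute this from the memory and swap actually present;
 * the figure here is purely for the example. */
static size_t commit_limit = 256UL * 1024 * 1024;   /* e.g. 128M RAM + 128M swap */
static size_t committed;                             /* total promised so far */

/* Called at mmap()/brk()/fork() time: refuse the request up front if the
 * kernel could not honor it even in the worst case.  No pages are
 * allocated or touched here; it is pure bookkeeping. */
static int vm_reserve(size_t bytes)
{
        if (committed + bytes > commit_limit)
                return -1;              /* would overcommit: fail with ENOMEM now */
        committed += bytes;
        return 0;
}

/* Called at munmap()/exit() time: give the reservation back. */
static void vm_release(size_t bytes)
{
        committed -= bytes;
}

int main(void)
{
        /* A 40M mapping is only charged against the limit, not populated. */
        if (vm_reserve(40UL * 1024 * 1024) == 0)
                printf("40M reserved, %lu bytes of commit left\n",
                       (unsigned long)(commit_limit - committed));

        /* A request the machine could never back is refused immediately,
         * instead of being granted now and killed later under OOM. */
        if (vm_reserve(1UL << 30) != 0)
                printf("1G refused: would exceed RAM + swap\n");

        vm_release(40UL * 1024 * 1024);
        return 0;
}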

This would of course mean that you'd need a LOT of slow/low-priority virtual
memory (i.e., swap) "just in case", but the worst case (where all of the
committed memory is actually used) would be rare and only happen in OOM
situations anyway.

While, yes, a 40 meg process that fork()s will have to make sure that
there are 40 megs of VM free, it doesn't mean that COW cannot be used,
and it does not mean all 40 megs will have to be paged in.
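
To make that last point concrete, here's a rough test (the timing approach
and the 40 meg figure are just illustrative) showing that fork()ing a
process with a large dirty heap copies page tables, not the 40 megs itself;
under the proposed option the only extra step is that the fork must find
40 megs of unreserved VM or fail cleanly with ENOMEM:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/time.h>
#include <sys/wait.h>

/* Rough illustration that fork() of a process with a large dirty heap does
 * not copy the heap: only page tables are duplicated, and individual pages
 * are copied later, if and when either side writes to them.  The 40M size
 * matches the example above and is otherwise arbitrary. */
#define HEAP_SIZE (40UL * 1024 * 1024)

int main(void)
{
        char *heap = malloc(HEAP_SIZE);
        struct timeval t0, t1;
        pid_t pid;

        if (heap == NULL)
                return 1;
        memset(heap, 0xaa, HEAP_SIZE);  /* dirty all 40M so it really exists */

        gettimeofday(&t0, NULL);
        pid = fork();
        gettimeofday(&t1, NULL);

        if (pid == 0) {
                heap[0] = 1;            /* child writes: only this page is copied */
                _exit(0);
        }

        waitpid(pid, NULL, 0);
        printf("fork() of a %luM process took %ld microseconds; "
               "nothing like 40M was copied\n",
               HEAP_SIZE / (1024 * 1024),
               (long)(t1.tv_sec - t0.tv_sec) * 1000000L +
               (t1.tv_usec - t0.tv_usec));

        /* Under the proposed no-overcommit option the only difference is
         * that fork() must also charge another 40M against RAM + swap, or
         * fail cleanly; the COW behaviour above stays exactly the same. */
        free(heap);
        return 0;
}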

I would gladly take this option on a production server. I can easily
sacrifice slow IDE drives to the VM cause and NFS-mount more swap space
if need be, without having to worry about random tasks dying because the
kernel ran itself low on memory that it told every process it had an
infinite amount of.
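
That "infinite amount" is easy to see today: a probe like the one below
(chunk size and cap are arbitrary, and how far it gets depends on the
kernel and on how much RAM and swap you actually have) will usually be
promised far more address space than the machine could ever back, and
nothing complains until the pages are actually touched later:

#include <stdio.h>
#include <stdlib.h>

/* Probe how much memory the kernel will *promise* without backing it.
 * We malloc() 64M chunks but never touch them, so no physical pages or
 * swap are consumed; with overcommit, the total granted can easily
 * exceed RAM + swap.  Chunk size and the 4G cap are arbitrary. */
#define CHUNK      (64UL * 1024 * 1024)
#define MAX_CHUNKS 64                   /* stop at 4G of address space */

int main(void)
{
        void *chunks[MAX_CHUNKS];
        unsigned long granted = 0;
        int i;

        for (i = 0; i < MAX_CHUNKS; i++) {
                chunks[i] = malloc(CHUNK);
                if (chunks[i] == NULL)
                        break;
                granted += CHUNK / (1024 * 1024);
        }

        printf("kernel promised %lu megs of untouched memory "
               "(compare with your RAM + swap)\n", granted);

        while (i-- > 0)
                free(chunks[i]);
        return 0;
}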

Note, I am saying "OPTION". This should not be the default behavior, but
there's certainly no reason why it can't be an option for those of us who
run Linux with swap on a production machine.

I trust that I will be corrected if I am a dumbass. :)

Michael Bacarella



