Re: Avoiding OOM on overcommit...?

From: Mirian Crzig Lennox (mirian@cosmic.com)
Date: Sat Mar 18 2000 - 01:11:59 EST


In article <Pine.BSF.4.05.10003171855400.9271-100000@bsd1.nyct.net>,
Michael Bacarella <mbac@nyct.net> wrote:
>
>No overcommit of VM. Yes! This would be HORRIBLE for performance, but ONLY
>if you're actually allocating the memory to the process. If you don't
>actually allocate it, but in fact just ensure that it could allocate it
>in the future, you take no performance hit and gain that much stability.

I agree with you, Michael. This would make it possible to write
applications that are robust about memory allocation, since they would
know at malloc() time whether the memory they requested is actually
available, and could do something intelligent (e.g., purge buffers,
alert the user, wait) beyond merely catching SIGTERM and exiting
gracefully.

Overcommitting memory is the moral equivalent of writing bad checks and
praying there will be money to cover them before they are cashed. It's
completely irresponsible-- and when it fails, it really bites down hard.

>While, yes, a 40 meg process that fork()s will have to make sure that
>there are 40 megs of VM free, it doesn't mean that COW cannot be used,
>and it does not mean all 40 megs will have to be paged in.

I know I'm going to get pelted with rotten tomatoes for saying so, but
if you're talking about fork+exec, vfork() was an obscenity, but it did
handle that trivial case quite well.

>I trust that I will be corrected if I am a dumbass. :)

No, I think you're spot-on.

--Mirian

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/



This archive was generated by hypermail 2b29 : Thu Mar 23 2000 - 21:00:23 EST