Re: Avoiding OOM on overcommit...?

From: James Sutherland (jas88@cam.ac.uk)
Date: Sun Mar 19 2000 - 16:31:05 EST


On Sat, 18 Mar 2000 07:39:23 -0600, you wrote:

>On Fri, 17 Mar 2000, Michael Bacarella wrote:
>>This was mentioned already, but I haven't seen a really decent reason for
>>why it's not a good idea. Therefore, I'm giving it more airtime.
>>
>>Solution?
>>
>>No overcommit of VM. Yes! This would be HORRIBLE for performance, but ONLY
>>if you're actually allocating the memory to the process. If you don't
>>actually allocate it, but in fact just ensure that it could allocate it
>>in the future, you take no performance hit and gain that much stability.
>>
>>This would of course mean that you'd need a LOT of slow/low-priority
>>virtual memory "just in case", but the worst case (where all of the
>>committed memory is actually used) would be rare and only happen in OOM
>>situations.
>>
>>While, yes, a 40 meg process that fork()s will have to make sure that
>>there are 40 megs of VM free, it doesn't mean that COW cannot be used,
>>and it does not mean all 40 megs will have to be paged in.
>>
>>I would gladly take this option on a production server. I can easily
>>sacrifice slow IDE drives to the VM cause and NFS-mount more swap
>>space if need be, without having to worry about random tasks dying
>>when the kernel finds itself short of memory it had told a process it
>>had an infinite amount of.
>>
>>Note, I am saying "OPTION". This should not be the default behavior, but
>>there's certainly no reason why it can't be an option for those of us who
>>will run Linux with swap, on a production machine.
>
>This IS a solution, though not an "optimal" one. I'm in favor of having
>resource controls that let the administrator allocate resources and
>know how much memory/swap is required, while still allowing
>overcommit as an option.
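
(For concreteness, the accounting proposed above boils down to
something like the sketch below. The names -- commit_charge(),
total_ram_pages and so on -- are invented for illustration; this is
just the idea written out in C, not kernel code or a patch.)

#include <errno.h>      /* for ENOMEM */

/* Strict commit accounting: every request for address space is
 * charged against RAM + swap up front, even though no page is
 * actually allocated until it is touched. */

static unsigned long total_ram_pages;   /* fixed at boot               */
static unsigned long total_swap_pages;  /* grows/shrinks with swapon() */
static unsigned long committed_pages;   /* sum of all commitments      */

/* Called from brk(), mmap(), fork() etc. with the number of pages the
 * caller might eventually touch. Nothing is paged in here; we only
 * guarantee the pages could be found later. */
int commit_charge(unsigned long pages)
{
        if (committed_pages + pages > total_ram_pages + total_swap_pages)
                return -ENOMEM;  /* fail at syscall time, not by killing tasks */
        committed_pages += pages;
        return 0;
}

/* Called when a mapping goes away or the process exits. */
void commit_uncharge(unsigned long pages)
{
        committed_pages -= pages;
}

fork() would have to charge the whole of the parent's writable
mappings even though COW means most of them are never copied; that is
exactly the cost the proposal says it is willing to pay.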

Again, you seem to assume that resource limits and overcommit are
mutually exclusive. In fact, they are orthogonal; for most situations,
I would want BOTH. I want to be able to set limits on resource
consumption by users, and have demand-allocated VM.
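
That is why I see them as orthogonal: a per-process cap on address
space can already be expressed with setrlimit(), independently of
whether the pages inside that cap are handed out lazily or up front.
A rough sketch (the 256 MB figure is arbitrary; how consistently
RLIMIT_AS is enforced is a separate question):

#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

int main(void)
{
        struct rlimit rl;

        /* Arbitrary example: cap this process (and its children)
         * at 256 MB of address space. */
        rl.rlim_cur = rl.rlim_max = 256UL * 1024 * 1024;
        if (setrlimit(RLIMIT_AS, &rl) != 0) {
                perror("setrlimit");
                return 1;
        }

        /* From here on, brk()/mmap() beyond the cap fail with ENOMEM,
         * whether or not the kernel overcommits within the cap. */
        return 0;
}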

>I have no problem with overcommit on a workstation system that is
>dedicated to one user. It is truly horrible to do that to a server,
>whether it is for web usage, mail, DNS, disk, or multiple users.

How is demand-allocation of VM a "horrible" thing to do? Flexible,
demand-allocated memory is, IMO, vastly preferable to semi-static
worst-case allocation.
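
The difference is easy to see with a toy program: grab a large
anonymous mapping and touch only a few pages of it. Under demand
allocation the mapping costs almost nothing until it is used --
compare VmSize with VmRSS in /proc/<pid>/status while it sits at the
getchar(). (The 64 MB size is arbitrary and a 4K page size is
assumed.)

#include <stdio.h>
#include <sys/mman.h>

#define SIZE   (64UL * 1024 * 1024)   /* 64 MB of address space     */
#define STRIDE (4UL * 1024 * 1024)    /* touch one page every 4 MB  */

int main(void)
{
        unsigned long i;
        char *p;

        p = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        /* Fault in 16 of the 16384 pages in the mapping. */
        for (i = 0; i < SIZE; i += STRIDE)
                p[i] = 1;

        getchar();   /* pause so VmSize vs. VmRSS can be inspected */
        return 0;
}

Under the "no overcommit" scheme both versions still run; the only
difference is that the full 64 MB has to be reservable somewhere at
mmap() time, not that it has to be paged in.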

James.



