Re: Some questions about linux kernel.

From: Jesse Pollard (pollard@cats-chateau.net)
Date: Sat Mar 18 2000 - 08:52:29 EST


On Fri, 17 Mar 2000, James Sutherland wrote:
>On Fri, 17 Mar 2000, Rik van Riel wrote:
>
>> On Thu, 16 Mar 2000, Paul Jakma wrote:
>> > On Thu, 16 Mar 2000, James Sutherland wrote:
>> >
>> > > No. *ANY* memory allocation system can run out of memory. Avoiding
>> > > "overcommitting" would make the OOM situation arise SOONER (and more
>> > > frequently), as well as killing performance.
>> >
>> > well, it's more a question of whether you make promises that you
>> > might not be able to keep. If you do (ie overcommit) then it's your
>> > (kernel) problem. If you don't, it's not.
>>
>> > > So his processes STILL end up dying randomly, but they do it sooner rather
>> > > than later. Hrm. Wow.
>> >
>> > but then it's the users problem - not the OS..
>>
>> Erhmm, sorry that I have to say this Paul, but this
>> sounds like non-overcommit is about shifting the blame
>> and not about fixing the problem...
>
>In fact, it makes the problem worse.

It makes it better. The specific process causing the problem is now
identified, especially if resource allocation and quotas are used.
Nearly every program that is run has a deterministic memory
requirement. In the cases where the resources granted turn out to be
insufficient, the specific process has broken/changed the assumptions
used in computing that amount. This allows for examining why that
particular process exceeded its quota. If the determination is made
that more resources are required, you have a justification for
purchasing more memory and/or swap space. Without that accounting
cycle it becomes "gee - linux crashed again.... let's get something
that won't..."

>
>> Knowing all of this, would you still want a non-overcommitting
>> system where you need more swap just to run out of memory sooner?
>
>This is the core problem - it does not cause any improvement w.r.t.
>running out of memory. It does avoid some unrelated theoretical problem -
>but that's not the point here.

This is a matter of implementation. It is not a "theoretical problem" but
a practical application of resource accounting and quota control.
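
For reference, 2.2 and later kernels already expose a crude knob here,
though neither setting gives you the strict accounting described above:

# 0 = heuristic check in vm_enough_memory() (the default);
# non-zero = always overcommit, no check at all.
echo 0 > /proc/sys/vm/overcommit_memory

A true "never overcommit" mode would have to account for every
commitment up front, which is exactly the quota bookkeeping being
argued for.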

>> If a renegade application steals all memory, it will cause
>> other (innocent) applications to die. Unless you have some
>> code in the kernel which is able to shoot down the renegade
>> process and save the rest.
>
>Quite.
>
>> (non-)overcommit isn't going to make any difference here...
>
>I disagree with this bit - it will make the problem worse on any given
>configuration. For example, forking a load of Apache processes in 64Mb of
>RAM, no swap is fine with overcommit - but will cause OOM problems without
>overcommit. Certainly no improvement!

Wrong. It makes it possible to identify the offender, and to prevent
it from impacting other processes. Many large multiuser systems run
this way, and do NOT have the OOM problem. Implementation is not easy,
and the accounting practices have to be clearly documented. BUT it is
worth it.
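
To put the Apache example in concrete terms (a hypothetical sketch,
assuming strict accounting were in force): the failure shows up as a
clean ENOMEM from fork() in the parent, which can log it and stop
spawning, instead of a SIGKILL landing on some arbitrary process later:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/types.h>

/* Hypothetical pre-forking server loop: spawn workers until the
 * memory accounting says no. */
void spawn_workers(int wanted)
{
	int i;
	for (i = 0; i < wanted; i++) {
		pid_t pid = fork();
		if (pid < 0) {
			/* Under strict accounting the commitment is
			 * refused here, in the process that asked for
			 * it: the offender is identified, and no other
			 * process gets shot. */
			fprintf(stderr, "fork: %s, running with %d "
				"workers\n", strerror(errno), i);
			return;
		}
		if (pid == 0) {
			/* child: serve requests, then exit */
			_exit(0);
		}
	}
}
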
-------------------------------------------------------------------------
Jesse I Pollard, II
Email: pollard@cats-chateau.net

Any opinions expressed are solely my own.

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/


