Re: Avoiding OOM on overcommit...?

From: James Sutherland (jas88@cam.ac.uk)
Date: Tue Mar 21 2000 - 19:10:58 EST


On Tue, 21 Mar 2000 08:49:44 -0600 (CST), you wrote:
>David Whysong <dwhysong@physics.ucsb.edu>:
>> On Mon, 20 Mar 2000, Jesse Pollard wrote:
>> >On Mon, 20 Mar 2000, Horst von Brand wrote:
>> >>Jesse Pollard <pollard@tomcat.admin.navo.hpc.mil> said:
>> >>
>> >>[...]
>> >>
>> >>> Without overcommit:
>> >>> You can support 100 simultaneous connections, with full saturation of
>> >>> each server, with no failures.
>> >>>
>> >>> Result: happy customers, happy management, maybe even a raise.
>> >>
>> >>Management nagging about supporting more pages, worried by the waste of several
>> >>GB of disk that has never, ever been touched. System is sluggish, needs
>> >>constant tweaking of "resource allocation quotas" as applications crash,
>> >>even when there are resources available. Seriously consider firing inept
>> >>sysadmin.
>> >
>> >Disk space is cheap.
>>
>> And completely orthogonal to overcommit. You put the same disk space on
>> the overcommitted machine and it will work fine.
>
>NOPE - runaway memory allocators can still cause OOM crashes.

No - a system crash is a kernel bug. Fix that, rather than avoiding
the circumstance under which it appears. Besides - you can run out of
memory and crash due to a kernel bug with or without overcommit. You
will do so MORE often without overcommit.
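
To make the failure mode concrete, here is a rough sketch (my own
illustration, not anything posted in this thread): with overcommit the
oversized malloc() goes through, and the trouble only starts when the pages
are actually touched; without overcommit, malloc() itself is where it fails.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* Pick something comfortably past RAM + swap on the box in question. */
    size_t len = (size_t)1024 * 1024 * 1024;    /* 1 GB */
    char *p;

    p = malloc(len);
    if (p == NULL) {
        /* Without overcommit, this is where the allocator is told "no". */
        perror("malloc");
        return 1;
    }
    printf("malloc() of %lu MB succeeded\n", (unsigned long)(len >> 20));

    /* With overcommit, real pages are only claimed here, so a runaway
     * allocator drags the whole system towards OOM instead of failing
     * cleanly at malloc() time. */
    memset(p, 0xff, len);
    printf("touched every page\n");

    free(p);
    return 0;
}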

>> >System is no more sluggish than currently, and is much more reliable.
>>
>> Yes, because you added more memory.
>
>NOPE - additional swap space does not make it more sluggish in and of itself.
>I guaranteed the space that is allocated to processes. If the processes don't
>use it, there is no additional overhead. If the processes DO use it, then they
>won't be aborted by OOM conditions.

So? This is true of any additional swap. If it is available, the
process can use it. If the process needs more memory than is
available, it fails.

>If you prefer a crash to degraded performance, you are welcome to it. I want
>a way to control it.

You keep blaming this "crash" of yours on running out of memory. If
the kernel crashes/reboots when short of memory, that's a separate bug
in the kernel. Nothing to do with overcommit.
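
As for "a way to control it": assuming a kernel recent enough to expose
/proc/sys/vm/overcommit_memory, the overcommit policy is a one-line write,
though neither value gives the strict reserve-everything mode being asked
for. A minimal sketch:

#include <stdio.h>

int main(void)
{
    /* 0 = heuristic checks at malloc() time, 1 = always overcommit.
     * (Assumption: the running kernel exposes this tunable.) */
    FILE *f = fopen("/proc/sys/vm/overcommit_memory", "w");

    if (f == NULL) {
        perror("/proc/sys/vm/overcommit_memory");
        return 1;
    }
    fputs("0\n", f);
    fclose(f);
    return 0;
}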

>> The correct solution to OOM situations is not to turn off overcommit. The
>> correct solution is to put more memory in the machine.
>
>With the current state of Linux, no matter how much memory you put in, you
>will still get an OOM situation whenever processes want more memory than
>you have. You can't avoid it. You can't prevent it. You can't control it.

You will ALWAYS get an OOM situation when you use up all the memory in
the system. Nothing to do with overcommit. Yes, extremely restrictive
user quotas would prevent this. So what?
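
On the quota point, a per-process cap is easy to sketch (again my own
illustration, assuming the kernel enforces RLIMIT_AS for the allocation):
the process that overreaches gets a failed malloc() instead of dragging
the whole box into OOM.

#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void)
{
    /* Cap this process's address space at 64 MB. */
    struct rlimit lim;
    char *p;

    lim.rlim_cur = 64 * 1024 * 1024;
    lim.rlim_max = 64 * 1024 * 1024;
    if (setrlimit(RLIMIT_AS, &lim) != 0) {
        perror("setrlimit");
        return 1;
    }

    /* 128 MB exceeds the cap, so only this process pays the price. */
    p = malloc((size_t)128 * 1024 * 1024);
    printf("malloc: %s\n", p ? "succeeded" : "failed, as the quota intends");
    free(p);
    return 0;
}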

James.

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/


