Re: Avoiding OOM on overcommit...?

From: Marco Colombo (marco@esi.it)
Date: Mon Mar 27 2000 - 05:18:43 EST


On Sun, 26 Mar 2000, Linda Walsh wrote:

> David Whysong wrote:
> > If you run out of a resource, the system should not crash. The kernel just
> > has to free up the resource. A convenient way of doing that is to kill a
> > user process.
> ---
> Then you have violated the integrity of the user-process space.
> Tell me, which processes are killed when the system runs out file descriptors?
> How about processes? Disk space? Why are you treating memory differently?

Because overcommitting swap space is possible and it works (higher
system throughput in the average case).

Please explain how you'd implement 'overcommitting' of PIDs, or open files.

And BTW, the system is not 'overcommitting' anything. The kernel is just
extending VA space of a process. It's giving you more "names". It's not
giving you any object. VA space is not "memory".

Offsets in a file are just "names". That's why you can write one block
at 1GB offset, without allocating 1GB of disk space. Is this overcommitting
of FS space? Just because the system lets you create a file which is
1GB in *size* (but not in disk usage), it doesn't mean you have 1GB of
free disk space. And just because the system lets you malloc(1GB) of
address space, and even lets you successfully write at 1GB-1 offset,
it doesn't mean there's 1GB of free RAM or swap. Why are you treating
memory differently from disk space?

>
> -l
>
> --
> Linda A Walsh | Trust Technology, Core Linux, SGI
> law@sgi.com | Voice: (650) 933-5338
>

.TM.

-- 
      ____/  ____/   /
     /      /       /			Marco Colombo
    ___/  ___  /   /		      Technical Manager
   /          /   /			 ESI s.r.l.
 _____/ _____/  _/		       Colombo@ESI.it




This archive was generated by hypermail 2b29 : Fri Mar 31 2000 - 21:00:19 EST