Re: Avoiding OOM on overcommit...?

From: Linda Walsh (law@sgi.com)
Date: Sun Mar 26 2000 - 18:27:45 EST


John Ripley wrote:
> The whole point is: in a system without overcommit, a process is never
> killed for overcommitting.

---
	Exactly.  *Deciding* what processes to kill is an administrative
*policy*.  It shouldn't be an automatic kernel function except as an extension.

I feel safer running an application on a system that I know will either deny a request for resources that aren't there, or suspend the process when a denial can't be delivered. Then I know what is happening in my "state machine".

I'm trying to think about what cases would need to be caught to enable this type of integrity.

On a fork, COW pages would have to be 'committed'. This implies we need a committed-page count in addition to a free-page count. The same goes for pages committed by brk/sbrk.

For the stack we'd need to decide on a behavior. Maybe you start with 1 page, and if you run out of free pages when you need more stack, you get a signal. The default action is to die; the user can set it to be ignored, in which case the process is suspended -- put to sleep waiting for free memory -- until its request can be fulfilled. Optionally, the program developer could specify a default stack size at link time that would 'commit' the specified amount up front.
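A rough user-space sketch of the bookkeeping I have in mind (all of the names here are hypothetical illustrations, not actual kernel code):

/*
 * Strict commit accounting, sketched in user space.  Every fork (COW
 * pages), brk/sbrk and stack growth must reserve pages up front, and
 * the reservation fails with ENOMEM instead of overcommitting.
 */
#include <errno.h>
#include <stdio.h>

static unsigned long total_pages     = 1024;  /* pages the system can back */
static unsigned long committed_pages = 0;     /* pages already promised    */

/* Try to commit 'n' pages; deny the request up front rather than overcommit. */
static int commit_pages(unsigned long n)
{
        if (committed_pages + n > total_pages)
                return -ENOMEM;
        committed_pages += n;
        return 0;
}

static void uncommit_pages(unsigned long n)
{
        committed_pages -= n;
}

/* A brk()-style grow: commit first, only then hand out the address space. */
static int sim_brk_grow(unsigned long n)
{
        int err = commit_pages(n);
        if (err)
                return err;
        /* ... extend the heap mapping here ... */
        return 0;
}

/* A fork()-style duplicate: COW pages still count as committed for the
 * child as well, since either copy may later write and need its own page. */
static int sim_fork(unsigned long parent_pages)
{
        return commit_pages(parent_pages);
}

int main(void)
{
        if (sim_brk_grow(900) == 0)
                printf("brk: committed, %lu/%lu pages used\n",
                       committed_pages, total_pages);
        if (sim_fork(900) == -ENOMEM)
                printf("fork: denied up front, no OOM kill later\n");
        uncommit_pages(900);
        return 0;
}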

If we get the bookkeeping correct, then adding 'virtual swap' is just a matter of bumping the 'free' count above what is physically available. You get the same behavior as above if you are running overcommitted and run out of memory.
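Under the same hypothetical bookkeeping, 'virtual swap' is nothing more than raising the commit ceiling above the physical page count -- the accounting code doesn't change, only the limit (again, names are illustrative only):

#include <stdio.h>

static unsigned long physical_pages     = 1024;
static unsigned long virtual_swap_pages = 0;    /* admin-tunable knob */

static unsigned long commit_limit(void)
{
        /* With virtual_swap_pages == 0 you get strict accounting;
         * raising it re-introduces a controlled amount of overcommit. */
        return physical_pages + virtual_swap_pages;
}

int main(void)
{
        virtual_swap_pages = 512;
        printf("commit limit: %lu pages (%lu physical + %lu virtual)\n",
               commit_limit(), physical_pages, virtual_swap_pages);
        return 0;
}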

What other system calls/functions overcommit?

-l
--
Linda A Walsh    | Trust Technology, Core Linux, SGI
law@sgi.com      | Voice: (650) 933-5338



