Re: Overcommitable Memory...

From: Helge Hafting (helgehaf@idb.hist.no)
Date: Mon Mar 20 2000 - 03:06:06 EST


> I don't understand the whole situation.
>
> Why should you EVER be out of memory if you aren't out of swap in an OS
> that supposedly supports demand memory paging?
>
Because we *aren't* talking about an "out-of-RAM" problem. We are
talking about an "out-of-RAM+swap" problem. We are looking at virtual
memory as a whole here; the division into swap and RAM isn't
interesting, it is merely a performance issue.

> And then IF you really truly run out of memory, including swap, at that
> point refuse additional requests for memory, including fork/exec'ing new
> processes, rather than killing off existing processes.

You don't understand overcommitting. The problem is that some
techniques, particularly fork(), reserve much more memory than they
will ever need. So preventing fork() when there isn't memory to back
it would severely restrict the number of processes you may run, or
force you to install many gigabytes of swap that will never be used.
This would effectively make Linux unusable for some tasks on small
machines (i.e. hobbyist machines and the money-saving old machines
corporations use with Linux).
You can run your machine that way though; take a look at
/proc/sys/vm/overcommit_memory
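
If you want to try it, switching the policy is just a write to that
file. A minimal sketch follows; the value written is an assumption on
my part (on the kernels I know, 0 asks the kernel to check allocations
and 1 always overcommits, but the exact values and their meanings have
varied between versions), so check the documentation for your kernel:

	/* Minimal sketch: select the memory-overcommit policy by
	 * writing to the sysctl file mentioned above.  The value is
	 * an assumption: 0 is taken to mean "check allocations before
	 * granting them".  Needs root.
	 */
	#include <stdio.h>

	int main(void)
	{
		FILE *f = fopen("/proc/sys/vm/overcommit_memory", "w");

		if (!f) {
			perror("/proc/sys/vm/overcommit_memory");
			return 1;
		}
		fputs("0\n", f);  /* 0: check allocations first */
		fclose(f);
		return 0;
	}

From a shell, echo does the same job without the compile step.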

A "safe" fork doubles the memory usage of the forking process, but it
turns out
that many fork()s only use a few pages. So overcommitting is a good
optimization
for common cases.
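
To make that concrete, here is an illustration (the sizes and the
child's behavior are invented for the example, not taken from any real
workload): a process with a large dirty heap forks, and the child
writes a single byte before exiting. With copy-on-write the fork costs
one copied page; strict accounting would still have to reserve a full
second copy of the heap at fork() time.

	/* Sketch of why fork() overcommit matters. */
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <unistd.h>
	#include <sys/wait.h>

	#define BIG (256L * 1024 * 1024)  /* 256 MB of heap */

	int main(void)
	{
		pid_t pid;
		char *buf = malloc(BIG);

		if (!buf) {
			perror("malloc");
			return 1;
		}
		memset(buf, 1, BIG);  /* dirty every page in the parent */

		pid = fork();  /* conceptually duplicates 256 MB... */
		if (pid < 0) {
			perror("fork");  /* ...and would fail here under
					  * strict accounting if RAM+swap
					  * can't back a second copy */
			return 1;
		}
		if (pid == 0) {
			buf[0] = 2;  /* child writes one byte: exactly
				      * one page is actually copied */
			_exit(0);
		}
		wait(NULL);
		free(buf);
		return 0;
	}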

Helge Hafting



