Horst von Brand writes:
> Olaf Weber <email@example.com> said:
>> Horst von Brand writes:
>> > Olaf Weber <firstname.lastname@example.org> said:
>> > [...]
>> > Many architectures don't know of read-only data, so you must
>> > consider them writeable anyway.
>> If you can implement copy on write, you can implement abort on
>> write. And if you cannot implement copy on write, how do you
>> implement overcommitment?
> COW is then implemented by marking the page as non-available; when
> accessed you fault and look at what is going on.
So the space saving is bought at the expense of faulting on every
_read_ of shared data. Very expensive in terms of time, unless you
unshare data on reads as well as on writes.
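To make the mechanism being argued about concrete, here is a toy
user-space model of the COW write-fault path; all identifiers are
invented for illustration and this is not actual kernel code. It
assumes hardware that can mark a page read-only; on the architectures
Horst mentions, which cannot, the page would have to be left wholly
non-present, which is exactly why reads fault too.

```c
/* Toy model of a COW write fault: pages shared after fork are mapped
 * non-writable; the first write faults, and the handler either copies
 * the page (if still shared) or just re-enables writing (if we are
 * the sole owner).  All names are illustrative. */
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

struct page {
    unsigned char *data;
    int refcount;          /* how many address spaces share this page */
};

struct pte {
    struct page *page;
    int writable;
};

/* Called on a write fault against a non-writable mapping. */
static void handle_write_fault(struct pte *pte)
{
    if (pte->page->refcount > 1) {
        /* Shared: make a private copy (the actual COW step). */
        struct page *copy = malloc(sizeof *copy);
        copy->data = malloc(PAGE_SIZE);
        memcpy(copy->data, pte->page->data, PAGE_SIZE);
        copy->refcount = 1;
        pte->page->refcount--;
        pte->page = copy;
    }
    /* Sole owner now either way: allow the write to proceed. */
    pte->writable = 1;
}
```

"Abort on write" would differ only in the shared branch: instead of
copying, refuse the access when no space is left.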
>> In terms of risk it makes a real difference if you overcommitted 100MB
>> of de-facto read-only COW memory, or 10MB of de-facto read-write
>> memory: the latter commitment is the one that kills.
> Something like that should be put into the kernel. Problem is, how do
> you differentiate sanely between good and bad risks here? The basic
> problem is that you get _potentially_ writable COW memory, and no way
> to know whether it will be written to, or even just discarded a few
> instructions ahead.
Do some extra bookkeeping and use heuristics: for example, when doing
a fork you can notice that N pages are in segments that are typically
not written to (text, rodata) and M pages are in segments that are.
The M then weighs heavier than the N when the assumed risk is
computed.
In a similar vein, the M would weigh more and more as the forked
process ages, because it becomes less likely that it will do an exec,
and more likely that the pages will be written to.
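As a sketch only, the heuristic above might look like this; the
weights, the aging curve, and the function name are all invented for
the example, not a proposal for actual numbers.

```c
/* Illustrative commit-risk heuristic: N read-only-segment pages get a
 * small fixed weight, M writable-segment pages a weight that grows as
 * the child ages without exec'ing (it becomes less likely to exec and
 * more likely to dirty its pages).  Returns assumed risk in pages. */
#include <assert.h>

static long commit_risk(long n_readonly, long m_writable, long age_ticks)
{
    /* Read-only pages: unlikely to ever be dirtied. */
    long risk = n_readonly / 10;

    /* Writable pages: weight ramps from 1/2 toward 1 over the first
     * 100 ticks after fork, then saturates. */
    long w = age_ticks > 100 ? 100 : age_ticks;
    risk += m_writable * (50 + w / 2) / 100;
    return risk;
}
```

A freshly forked child is thus charged much less than its full COW
footprint, while a long-lived one converges toward being charged for
all of its writable pages.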
>> In general, my point is just that some advocates of overcommitment
>> exaggerate the amount of extra VM that a non-overcommitting system
>> would require. An OOM situation is bad on both systems, though I
>> believe programs have a better chance of coping with it on a system
>> that doesn't overcommit.
> Try it once, you'll be very surprised. Some of the first machines I
> laid my hands on had no COW, so no memory overcommit. They had
> roughly the same amount of RAM + swap as later overcommiting
> machines I've got, which could stand much higher loads (and needed
> it badly, as processes became larger and CPUs faster, and thus more
> simultaneous users).
Except that without COW they probably had little or no sharing of
read-only memory either, and therefore required comparatively more
memory than a non-overcommitting system on capable hardware would
need. The proper comparison between the two is on the same hardware,
with the non-overcommitting version optimized not to waste memory.
>> If you have the infrastructure necessary to prevent overcommitment,
>> you are also in a position to allow it to a limited extent, or in
>> certain special cases. Without that infrastructure, no-one has a
>> choice.
> The problem here is the cost (of the infrastructure, and its impact
> on performance) vs. what it buys you. My argument is that it buys
> very little (if anything) for a sizeable cost.
What it buys you is a system that, when OOM, behaves in a way that is
IMHO cleaner than what happens when it has overcommitted. Or it buys
you a system where you can tune the extent to which you're willing to
overcommit.
The bookkeeping incurs some space and time overhead. It also saves
a tiny bit of time when a COW page has to be copied because the space
reservation has already been done.
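The reservation scheme can be sketched in a few lines; again the
names and the single global counter are illustrative assumptions, not
an actual implementation.

```c
/* Sketch of strict commit accounting: reserve space for every
 * potentially-writable COW page at fork time, so the fork itself can
 * fail cleanly, and the later COW copy needs no check at all. */
#include <assert.h>

static long commit_limit = 1000;  /* RAM + swap, in pages (example) */
static long committed = 0;        /* pages currently reserved */

/* At fork: reserve space for all COW pages up front.
 * Returns 0 on success, -1 if the fork would overcommit. */
static int reserve_cow(long cow_pages)
{
    if (committed + cow_pages > commit_limit)
        return -1;                /* refuse the fork; no OOM later */
    committed += cow_pages;
    return 0;
}

/* At a COW fault: space was reserved at fork, so the copy proceeds
 * without touching the accounting (the small time saving above). */
static void cow_copy(void) { /* memcpy of the page; nothing to account */ }

/* On exit or exec: release the reservation. */
static void release_cow(long cow_pages) { committed -= cow_pages; }
```

The failure point moves from an arbitrary write fault to the fork
call, where the program can actually handle it.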
You'll probably need some additional swap space, but the amount of
additional space is probably not that large. Of course we do not
_know_ how much, because the bookkeeping that would allow us to
determine this currently isn't done.
-- Olaf Weber
Do not meddle in the affairs of sysadmins, for they are quick to anger and have no need for subtlety.
This archive was generated by hypermail 2b29 : Thu Mar 23 2000 - 21:00:27 EST