Re: RSS Limit implementation issue

From: Bill Davidsen
Date: Thu Mar 30 2006 - 22:24:07 EST


Peter Chubb wrote:

"Bill" == Bill Davidsen <davidsen@xxxxxxx> writes:



Bill> Ram Gupta wrote:

Bill> If you want to make rss a hard limit, the result should be
Bill> swapping, not failure to run. I'm not sure the limit in that
Bill> form is a good idea, and before someone reminds me, I do
Bill> remember liking it better a few years ago.

Bill> If you can come up with a better way to adjust rss to get
Bill> better overall throughput while being fair to all processes,
Bill> go to it. But in general these things are a tradeoff; like
Bill> swappiness, you tune until the volume of complaints reaches
Bill> a minimum.

What I did in one experiment was to:
1. delay swapin requests if the process was over its rss limit,
   until it fell back below the limit, and
2. poke the swapper to try to swap out the current process's
   pages in that case.

The problem with the approach was that it behaved poorly under memory
pressure. If a process's optimum working set was larger than its RSS
limit, then either it was delayed to the point of glaciality, or it
could saturate the swap device (and so disturb other processes'
operation).
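
To make the mechanism concrete, here is a minimal user-space sketch of
that policy as I read it: the faulting process is delayed while it is
over its rss limit, and its own pages get pushed out until it drops
back under. The names (struct task, rss_limit, evict_own_page,
swapin_page) are invented for illustration; this is not the real
kernel path.

/* Illustrative user-space sketch only, not kernel code: delay a
 * "swapin" while the task is over its rss limit, and push the
 * task's own pages out until it is back under the limit.
 */
#include <stdio.h>
#include <unistd.h>

struct task {
    long rss;        /* resident pages */
    long rss_limit;  /* rss limit, in pages */
};

/* Pretend to push one of the task's own resident pages to swap. */
static void evict_own_page(struct task *t)
{
    if (t->rss > 0)
        t->rss--;
}

/* Bring one page in, but only once the task is under its limit. */
static void swapin_page(struct task *t)
{
    while (t->rss >= t->rss_limit) {
        evict_own_page(t);   /* "poke the swapper" at our own pages */
        usleep(1000);        /* delay the faulting task, not others */
    }
    t->rss++;                /* the page is now resident */
}

int main(void)
{
    struct task t = { .rss = 100, .rss_limit = 64 };
    int i;

    for (i = 0; i < 10; i++) {
        swapin_page(&t);
        printf("after fault %d: rss=%ld (limit %ld)\n",
               i, t.rss, t.rss_limit);
    }
    return 0;
}

Run it and rss just hovers at the limit: a working set bigger than the
limit means every fault evicts and reloads a page, which is exactly the
delay-or-saturate-the-swap-device behaviour described above.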



I'm paying close attention, but that's exactly the problem people have:
memory pressure gets high and the processes don't run. I thought of
"swap out two to swap in one" as a way to dribble the rss down, but I
doubt it's a magic solution. Swapping kills, but so does refusing to
swap when there's no memory. The obvious solution is to make the rss
limit hard and keep it small; I don't think that's the answer.
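
Here is the same kind of hand-waving sketch for the "two out for one
in" idea, reusing the invented names from the sketch above: while the
task is over its limit, every page brought in is paid for with two
evictions, so rss drifts down by one page per fault instead of sitting
pinned at the limit.

/* Illustrative sketch only: "swap out two to swap in one" while the
 * task is over its rss limit, so the rss dribbles down over time.
 */
#include <stdio.h>

struct task {
    long rss;        /* resident pages */
    long rss_limit;  /* rss limit, in pages */
};

static void evict_own_page(struct task *t)
{
    if (t->rss > 0)
        t->rss--;
}

/* Admit one page; while over the limit, pay for it with two evictions. */
static void swapin_page(struct task *t)
{
    if (t->rss > t->rss_limit) {
        evict_own_page(t);
        evict_own_page(t);
    }
    t->rss++;            /* net change is -1 while over the limit */
}

int main(void)
{
    struct task t = { .rss = 100, .rss_limit = 64 };
    int i;

    for (i = 0; i < 40; i++)
        swapin_page(&t);
    printf("rss=%ld (limit %ld)\n", t.rss, t.rss_limit);
    return 0;
}

The cost, of course, is that an over-limit process generates twice the
swap-out traffic for every page it faults in, so it trades one kind of
pressure for another.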

--
bill davidsen <davidsen@xxxxxxx>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/