> David Schwartz wrote:
> >>I would like to limit the maximum resident memory size
> >>of a process within a threshold, i.e. if its virtual
> >>memory footprint exceeds this threshold, it needs to
> >>swap out pages *only* from within its VM space.
> > Why? If you think this is a good way to be nice to other
> > processes, you're
> > wrong.
> Why is he wrong?
Because increasing the amount of swapping and paging will slow the system
down overall. Other processes will be interrupted more frequently and cache
effectiveness will decline. If the disks are shared, the additional disk
access will slow down other processes on the system as well.
It's also not clear how shared pages should be handled. If this process
causes large chunks of a shared library to be resident that wouldn't be
otherwise, should this be charged against the process or not? If you exempt
all shared memory, you not only create a hole a malicious process could
drive a truck through, but you also fail to measure memory use accurately.
If the process has a limited amount of work to do, it's much more sensible
to just let it get done using the memory it needs to run quickly so it can
get out of the way of other processes. If the process has an unlimited
amount of work to do, it makes more sense to control its use of processor
resources, which will inherently limit its resident set size.
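As a sketch of that alternative (assuming Linux/POSIX and Python's standard `resource` module; the 64 MiB and one-hour figures are arbitrary examples): throttling CPU rather than residency means the process runs less often, touches its pages less often, and its resident set shrinks naturally under memory pressure.

```python
# Sketch: throttle a process by CPU share, not by residency.
import os, resource

# Raise niceness by 10 (lower scheduling priority); os.nice returns
# the new niceness value for this process.
new_nice = os.nice(10)

# Or cap total CPU seconds consumed: the kernel delivers SIGXCPU
# when the soft limit is reached.
soft, hard = resource.getrlimit(resource.RLIMIT_CPU)
resource.setrlimit(resource.RLIMIT_CPU, (3600, hard))  # hypothetical 1-hour cap
```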
Basically, which pages should be resident is just one of those things the
system knows better than you do. Trying to make things better for one process
may wind up making things worse for everyone as the system as a whole bogs down.
Overall, this just doesn't strike me as a sensible thing to do. Depending
upon what effect he's trying to achieve, there are probably more sensible
ways to do it.
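For the record, the POSIX-style resource-limit interface does expose an RSS knob, but it does not deliver what the original poster wants: per setrlimit(2), Linux only honored RLIMIT_RSS in very old 2.4 kernels. A sketch (Python's standard `resource` module; the 64 MiB value is an arbitrary example):

```python
# Sketch: RLIMIT_RSS can be set, and the call succeeds, but modern
# Linux kernels ignore it during page reclaim -- it will not force a
# process to swap out only its own pages.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_RSS)
limit = 64 * 1024 * 1024  # hypothetical 64 MiB cap
resource.setrlimit(resource.RLIMIT_RSS, (limit, hard))
```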
This archive was generated by hypermail 2b29 : Sun Jun 15 2003 - 22:00:35 EST