Re: Process Aggregates: module based support for jobs

From: Andrey Savochkin (saw@saw.sw.com.sg)
Date: Fri Jun 23 2000 - 04:55:54 EST


Hello,

On Thu, Jun 22, 2000 at 10:42:29AM -0300, Rik van Riel wrote:
> I have a plan on how to make this work. A (very rough) draft
> of the idea is on:
>
> http://mail.nl.linux.org/linux-mm/2000-06/msg00296.html
>
> Basically we want to:
> 1) try and make an educated guess as to how big the actual
> working set of each process is
>
> 2) normalise the rss guarantees / limits to make sure that:
> - the sum of all rss guarantees never exceeds physmem/N
> - no rss limit is above ulimit.rss
> - a user does not exceed his/her rss quota
> - a user does not get below his/her rss guarantee
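
A minimal sketch of the normalisation pass described in that plan might
look like the following. The struct, its field names, and the choice of
N are assumptions for illustration only, not existing kernel code, and
the per-user quota checks (the last two points) are omitted:

/*
 * Hypothetical sketch of the normalisation step; everything here is
 * invented for illustration, not actual kernel code.
 */
#define N	2	/* reserve at most physmem/N for guarantees */

struct rss_info {
	unsigned long guarantee;	/* pages guaranteed resident */
	unsigned long limit;		/* hard rss cap, in pages */
	unsigned long rlim_rss;		/* ulimit.rss of the process */
};

static void normalise_rss(struct rss_info *p, int nproc,
			  unsigned long physmem)
{
	unsigned long total = 0, budget = physmem / N;
	int i;

	for (i = 0; i < nproc; i++)
		total += p[i].guarantee;

	for (i = 0; i < nproc; i++) {
		/* sum of all guarantees must not exceed physmem/N */
		if (total > budget)
			p[i].guarantee =
				p[i].guarantee * budget / total;
		/* no rss limit may be above ulimit.rss */
		if (p[i].limit > p[i].rlim_rss)
			p[i].limit = p[i].rlim_rss;
		/* a limit below the guarantee would be meaningless */
		if (p[i].limit < p[i].guarantee)
			p[i].limit = p[i].guarantee;
	}
}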

I see there are no objections to introducing rss guarantees; that's fine.

However, I'm not sure what you can really achieve by collecting
statistics and dynamically setting RSS limits.
Memory balancing should naturally lead to the same effects as described
in the message pointed to above. LRU page replacement should keep memory
for processes that actively and randomly access their pages, and release
pages from processes doing streaming work, because such processes return
to previously used pages only after long intervals (or never).
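
To make that concrete, here is a toy model of the kind of page aging
that plain LRU replacement already performs; the constants and names
are made up for illustration and are not the kernel's actual code:

/*
 * Toy model of LRU page aging.  A process that keeps returning to
 * its pages keeps their age high; a streaming process never touches
 * a page again, so its pages decay toward age 0 and become reclaim
 * candidates without any dynamically computed rss limit.
 */
struct page_info {
	unsigned int age;	/* higher = more recently used */
	int referenced;		/* hardware referenced bit */
};

static void age_page(struct page_info *page)
{
	if (page->referenced) {
		page->referenced = 0;
		page->age += 3;		/* touched: make it younger */
		if (page->age > 20)
			page->age = 20;
	} else {
		page->age >>= 1;	/* untouched: decay to reclaim */
	}
}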

Best regards
                                        Andrey V.
                                        Savochkin



