Re: Process Aggregates: module based support for jobs

From: Andrey Savochkin
Date: Tue Jun 20 2000 - 08:32:17 EST

Hello Sam,

Sorry for the delayed answer.

First of all, I want to announce that I've made available a small program to
play with the user beancounter patch and watch how it works in real life.
You may start your favorite application (or set of applications) from it and
check how much kernel memory is used to keep them running.

The document describing the patch
has also been updated to include instructions on how to test it.

On Sat, Jun 17, 2000 at 02:42:46PM -0500, Sam Watters wrote:
> I will be very interested to see how the memory limits are handled.

Considering memory, we may have two goals.
First of all, we have the genuinely limited resources:
 - unswappable memory (limited by the physical memory of the machine); and
 - total available memory (physical memory plus swap).
The first resource is of primary concern: a user may easily
cause a DoS by exhausting it, either intentionally or not.
It was the main focus of the initial stage of patch development.

The second resource is less important, because its shortage doesn't cause an
immediate hang of the whole system: allocations fail, and then, if that
doesn't help, some processes are killed (the usual OOM procedure).
I accounted for total memory, but in a very rough way, without proper handling
of shared pages. Now I have some ideas on how it may be improved.

The second goal of memory resource management is application performance.
Certainly, pages in memory are much more efficient to access than
swapped-out ones. So we should have some mechanism of fair (or, at least,
controllable) swap-out to defeat attacks where one user just thrashes memory
and other users suffer because pages of their tasks are heavily swapped out.
I'm developing such a mechanism at the moment.

Basically, I want to provide processes some guarantee that their pages are
not swapped out as long as they are "well-behaved" and stay under their limits.
I consider hard upper limits on resident size senseless, because resident
memory is not really an exhaustible resource (swap is designed precisely to
make it so). Moreover, upper limits do not provide true quality of service:
even if all subjects of accounting stay under their limits, there may still be
trouble, because overcommitment is a typical (and cost/performance-efficient)
policy. In contrast to upper limits, a guaranteed minimal amount of resources
provides quality of service for "well-behaved" subjects independently of the
(mis-)behavior of others. That's how I'm approaching memory management.

Best regards
                                        Andrey V.


This archive was generated by hypermail 2b29 : Fri Jun 23 2000 - 21:00:19 EST