Re: [PATCH 3/9] x86/intel_rdt: Cache Allocation documentation and cgroup usage guide

From: Marcelo Tosatti
Date: Wed Jul 29 2015 - 15:32:31 EST


On Wed, Jul 29, 2015 at 01:28:38AM +0000, Auld, Will wrote:
> > > Whenever cgroupE has zero tasks, remove exclusivity (by allowing other
> > > cgroups to use its exclusive ways).
> >
> > Same comment as above - cgroup masks can always overlap and other cgroups
> > can allocate the same cache, and hence won't have exclusive cache allocation.
>
> [Auld, Will] You can define all the CBMs to provide one CLOS with an exclusive area
>
> >
> > So naturally the cgroup with tasks would get to use the cache if it has the same
> > mask (say representing 50% of cache in your example) as others.
>
> [Auld, Will] Automatic adjustment of the CBM makes me nervous. There are times
> when we want to limit the cache for a process independently of whether there is
> lots of unused cache.

How about this:

desiredclos (closid  p1  p2  p3  p4)
                  1   1   0   0   0
                  2   0   0   0   1
                  3   0   1   1   0

p means part.
closid 1 is an exclusive cgroup.
closid 2 is a "cache hog" class.
closid 3 is the "default" closid.

Desiredclos is what the user has specified.
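
To make the tables below concrete, here is a minimal sketch of one
possible representation, assuming one capacity bitmask (one bit per
cache part) per closid for each table. The names (struct clos_tables,
NUM_PARTS, NUM_CLOSIDS) are illustrative only, not from the patch set:

#define NUM_PARTS	4	/* cache "parts" p1..p4 in the example */
#define NUM_CLOSIDS	3	/* closids 1..3 in the example */

struct clos_tables {
	unsigned long desired[NUM_CLOSIDS + 1];		/* what the user specified */
	unsigned long effective[NUM_CLOSIDS + 1];	/* after clearing unused closids */
	unsigned long expanded[NUM_CLOSIDS + 1];	/* after expanding the default closid */
};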

Transition 1: desiredclos --> effectiveclos
Clean all bits of unused closids (this must be re-run whenever a
closid's cgroup goes from empty to nonempty and vice-versa); a rough
sketch follows the table below.

effectiveclos (closid  p1  p2  p3  p4)
                    1   0   0   0   0
                    2   0   0   0   1
                    3   0   1   1   0
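
A rough sketch of transition 1, assuming the clos_tables layout above
and a hypothetical closid_has_tasks() helper (neither is in the patch
as posted):

static void update_effectiveclos(struct clos_tables *t)
{
	int closid;

	for (closid = 1; closid <= NUM_CLOSIDS; closid++) {
		if (closid_has_tasks(closid))
			t->effective[closid] = t->desired[closid];
		else
			t->effective[closid] = 0;	/* clean bits of unused closid */
	}
}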

Transition 2: effectiveclos --> expandedclos
Expand the default closid into any parts left unused, so that idle
exclusive space is not wasted:

expandedclos (closid  p1  p2  p3  p4)
                   1   0   0   0   0
                   2   0   0   0   1
                   3   1   1   1   0
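
Transition 2 could then look like the sketch below (same assumptions
as above): every part that no active closid claims is handed to the
default closid:

#define DEFAULT_CLOSID	3	/* closid 3 in the example */

static void update_expandedclos(struct clos_tables *t)
{
	unsigned long used = 0;
	int closid;

	for (closid = 1; closid <= NUM_CLOSIDS; closid++) {
		t->expanded[closid] = t->effective[closid];
		used |= t->effective[closid];
	}

	/* give every part nobody is using to the default closid */
	t->expanded[DEFAULT_CLOSID] |= ~used & ((1UL << NUM_PARTS) - 1);
}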

Then you have a different inplacecos for each CPU (see the
pseudo-code below), updated on the following events:

- task migration to a new pCPU
- task creation

id = smp_processor_id();
for (part = desiredclos.p1; ...; part++) {
	/*
	 * If my cosid is set and any other cosid is clear
	 * for the part, synchronize desiredclos --> inplacecos.
	 */
	if (part[mycosid] == 1 &&
	    part[any_othercosid] == 0)
		wrmsr(part, desiredclos);
}
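
Spelled out a bit more, one possible reading of the pseudo-code above,
assuming the earlier sketches plus a per-CPU inplacecos[] cache of what
was last written to that CPU's per-closid L3 mask MSRs (0xc90 + closid
on CAT hardware); how shared parts are handled is left aside here:

#define IA32_L3_CBM_BASE	0xc90	/* first per-closid L3 mask MSR */

/* last mask written to each CPU's MSRs, i.e. the "inplacecos" state */
static unsigned long inplacecos[NR_CPUS][NUM_CLOSIDS + 1];

static void sync_inplacecos(struct clos_tables *t, int mycosid)
{
	int cpu = smp_processor_id();
	unsigned long mask = 0, others = 0;
	int part, closid;

	/* collect the parts claimed by every other closid */
	for (closid = 1; closid <= NUM_CLOSIDS; closid++)
		if (closid != mycosid)
			others |= t->expanded[closid];

	/* take a part only if we want it and no other closid claims it */
	for (part = 0; part < NUM_PARTS; part++) {
		unsigned long bit = 1UL << part;

		if ((t->desired[mycosid] & bit) && !(others & bit))
			mask |= bit;
	}

	if (mask != inplacecos[cpu][mycosid]) {
		wrmsr(IA32_L3_CBM_BASE + mycosid, mask, 0);
		inplacecos[cpu][mycosid] = mask;
	}
}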
