Re: [PATCH v4 13/18] x86/intel_rdt: Add mkdir to resctrl file system

From: Fenghua Yu
Date: Mon Oct 17 2016 - 18:15:59 EST


On Mon, Oct 17, 2016 at 11:14:55PM +0200, Thomas Gleixner wrote:
> On Fri, 14 Oct 2016, Fenghua Yu wrote:
> > +/*
> > + * Trivial allocator for CLOSIDs. Since h/w only supports a small number,
> > + * we can keep a bitmap of free CLOSIDs in a single integer.
> > + *
> > + * Please note: this only supports one global CLOSID space shared
> > + * across multiple resources and multiple sockets. The user can create
> > + * rdtgroups, including the root rdtgroup, up to the number of CLOSIDs,
> > + * which is 16 on Broadwell. As the number of cache domains grows, or as
> > + * more resources share the CLOSID space, it becomes harder to find a
> > + * usable rdtgroup within the small number of CLOSIDs.
> > + *
> > + * In the future, if necessary, we can implement more complex
> > + * per-socket/per-resource-domain CLOSID allocation and utilize as many
> > + * CLOSIDs as possible. E.g. on a 2-socket Broadwell, the user could
> > + * create up to 16x16=256 rdtgroups, each with a different combination
> > + * of two L3 CBMs.
>
> I'm confused as usual, but a two socket broadwell has exactly two L3 cache
> domains and exactly 16 CLOSIDs per cache domain.
>
> If you take CDP into account then the number of CLOSIDs is reduced to 8 per
> cache domain.
>
> So we can never have more than nr(CLOSIDs) * nr(L3 cache domains) unique
> settings. So for a two socket broadwell it's 32 for !CDP and 16 for CDP.
>
> With the proposed user interface the number of unique rdtgroups is simply
> the number of CLOSIDs because we handle the cache domains already per
> resource, i.e. the meaning of CLOSID can be set independently per cache
> domain.
>
> Can you please explain why you think that we can have 16x16 unique
> rdtgroups if we just have 16 resp. 8 CLOSIDs available?
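
For reference before answering: the "trivial allocator" in the comment
above keeps one global bitmap of free CLOSIDs, roughly like the simplified
sketch below (a sketch of the idea, not the exact patch code):

  static int closid_free_map;	/* bit n set => CLOSID n is free */

  static int closid_alloc(void)
  {
  	int closid = ffs(closid_free_map);

  	if (closid == 0)
  		return -ENOSPC;
  	closid--;			/* ffs() is 1-based */
  	closid_free_map &= ~(1 << closid);
  	return closid;
  }

  static void closid_free(int closid)
  {
  	closid_free_map |= (1 << closid);
  }

Because the bitmap is global, every rdtgroup consumes the same CLOSID
number on all cache domains, which is why the current limit is simply
nr(CLOSIDs).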

To simplify, we only consider the CAT case; the situation with CDP is similar.

For a two-socket Broadwell, the schemata format is "L3:0=x;1=y",
where the two cache ids are 0 and 1, x is cache 0's CBM and y is
cache 1's CBM. The kernel then allocates one closid with cbm=x on cache 0
and one closid with cbm=y on cache 1.
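
The key point is that the CBM mask registers are per cache domain:
CLOSID n on cache 0 and CLOSID n on cache 1 are programmed independently.
Roughly, for one schemata line "L3:0=x;1=y" the kernel ends up doing
something like the sketch below (cpu0/cpu1 and closid_dom0/closid_dom1 are
hypothetical placeholders for a CPU in each domain and the closid chosen
for each domain):

  /*
   * 0xc90 is IA32_L3_QOS_MASK_0; the L3 mask MSR for CLOSID n is
   * 0xc90 + n, and it exists separately in each cache domain.
   */
  wrmsr_on_cpu(cpu0, 0xc90 + closid_dom0, x, 0);	/* cbm=x on cache 0 */
  wrmsr_on_cpu(cpu1, 0xc90 + closid_dom1, y, 0);	/* cbm=y on cache 1 */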

So we can have the following 16x16 different partitions/rdtgroups.
Each partition/rdtgroup has its own name and its own unique
combination of closids on the two caches. A task assigned to any
partition has its own unique combination of closids when running on
cache 0 and when running on cache 1. The cbm values below are example values.

name schemata closids on caches 0 and 1 allocated by the kernel
---- -------- --------------------------------------------------
(closid 0 on cache0 combined with 16 different closids on cache1)
part0: L3:0=1;1=1 closid0/cbm=1 on cache0 and closid0/cbm=1 on cache1
part1: L3:0=1;1=3 closid0/cbm=1 on cache0 and closid1/cbm=3 on cache1
part2: L3:0=1;1=7 closid0/cbm=1 on cache0 and closid2/cbm=7 on cache1
part3: L3:0=1;1=f closid0/cbm=1 on cache0 and closid3/cbm=f on cache1
part4: L3:0=1;1=1f closid0/cbm=1 on cache0 and closid4/cbm=1f on cache1
part5: L3:0=1;1=3f closid0/cbm=1 on cache0 and closid5/cbm=3f on cache1
part6: L3:0=1;1=7f closid0/cbm=1 on cache0 and closid6/cbm=7f on cache1
part7: L3:0=1;1=ff closid0/cbm=1 on cache0 and closid7/cbm=ff on cache1
part8: L3:0=1;1=1ff closid0/cbm=1 on cache0 and closid8/cbm=1ff on cache1
part9: L3:0=1;1=3ff closid0/cbm=1 on cache0 and closid9/cbm=3ff on cache1
part10: L3:0=1;1=7ff closid0/cbm=1 on cache0 and closid10/cbm=7ff on cache1
part11: L3:0=1;1=fff closid0/cbm=1 on cache0 and closid11/cbm=fff on cache1
part12: L3:0=1;1=1fff closid0/cbm=1 on cache0 and closid12/cbm=1fff on cache1
part13: L3:0=1;1=3fff closid0/cbm=1 on cache0 and closid13/cbm=3fff on cache1
part14: L3:0=1;1=7fff closid0/cbm=1 on cache0 and closid14/cbm=7fff on cache1
part15: L3:0=1;1=ffff closid0/cbm=1 on cache0 and closid15/cbm=ffff on cache1
(closid 1 on cache0 combined with 16 different closids on cache1)
part16: L3:0=3;1=1 closid1/cbm=3 on cache0 and closid0/cbm=1 on cache1
part17: L3:0=3;1=3 closid1/cbm=3 on cache0 and closid1/cbm=3 on cache1
...
part31: L3:0=3;1=ffff closid1/cbm=3 on cache0 and closid15/cbm=ffff on cache1
(closid 2 on cache0 combined with 16 different closids on cache1)
part32: L3:0=7;1=1 closid2/cbm=7 on cache0 and closid0/cbm=1 on cache1
part33: L3:0=7;1=3 closid2/cbm=7 on cache0 and closid1/cbm=3 on cache1
...
part47: L3:0=7;1=ffff closid2/cbm=7 on cache0 and closid15/cbm=ffff on cache1
(closid 3 through closid 15 on cache0, each combined with 16 different
closids on cache1)
...
part254: L3:0=ffff;1=7fff closid15/cbm=ffff on cache0 and closid14/cbm=7fff on cache1
part255: L3:0=ffff;1=ffff closid15/cbm=ffff on cache0 and closid15/cbm=ffff on cache1
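
The rest of the table is mechanical; assuming the example CBMs on each
cache are the 16 contiguous masks 0x1, 0x3, ..., 0xffff as above, all 256
lines can be generated by a small user-space loop (illustration only):

  #include <stdio.h>

  int main(void)
  {
  	/* 16 closids on cache0 x 16 closids on cache1 = 256 partitions */
  	for (int i = 0; i < 16; i++)
  		for (int j = 0; j < 16; j++)
  			printf("part%d: L3:0=%x;1=%x\n", i * 16 + j,
  			       (1u << (i + 1)) - 1, (1u << (j + 1)) - 1);
  	return 0;
  }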

To utilize as many of these combinations as possible, we may implement a
more complex allocation scheme than the current global one, e.g. allocating
CLOSIDs per cache domain: with 16 CLOSIDs on each of the two sockets, the
(closid on cache0, closid on cache1) pairs give 16x16=256 distinct
partitions, even though each socket alone has only 16 CLOSIDs.
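
A minimal sketch of what such a per-domain allocation could look like,
assuming 16 CLOSIDs per L3 cache domain (hypothetical types and names, not
proposed patch code):

  /* hypothetical: one free bitmap per cache domain, not one global map */
  struct cache_domain {
  	int	id;			/* cache id, e.g. 0 or 1 */
  	u16	closid_free_map;	/* bit n set => CLOSID n is free */
  };

  static int closid_alloc_dom(struct cache_domain *d)
  {
  	int closid = ffs(d->closid_free_map);

  	if (closid == 0)
  		return -ENOSPC;		/* this domain's CLOSIDs exhausted */
  	closid--;			/* ffs() is 1-based */
  	d->closid_free_map &= ~(1 << closid);
  	return closid;
  }

An rdtgroup would then carry one closid per cache domain instead of a
single global closid, so two rdtgroups could share a CLOSID on one socket
while differing on the other.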

Does this make sense?

Thanks.

-Fenghua