Re: [PATCH 13/32] Documentation, x86: Documentation for Intel resource allocation user interface

From: Thomas Gleixner
Date: Wed Jul 13 2016 - 08:51:09 EST


On Tue, 12 Jul 2016, Fenghua Yu wrote:
> +3. Hierarchy in rscctrl
> +=======================

What does rscctrl mean?

You were not able to find a more cryptic acronym?

> +
> +The initial hierarchy of the rscctrl file system is as follows after mount:
> +
> +/sys/fs/rscctrl/info/info
> + /<resource0>/<resource0 specific info files>
> + /<resource1>/<resource1 specific info files>
> + ....
> + /tasks
> + /cpus
> + /schemas
> +
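
This also wants an example of how one gets there in the first place. A
minimal sketch; the fs type name is my assumption, derived from the mount
point:

  # mount the resource control filesystem (fs type name assumed)
  mount -t rscctrl rscctrl /sys/fs/rscctrl
  # list the initial hierarchy
  ls /sys/fs/rscctrl
  ls /sys/fs/rscctrl/info
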
> +There are a few files and sub-directories in the hierarchy.

Shouldn't that read:

The following files and sub-directories are available:

> +3.1. info
> +---------

Those sub-points want to be indented so it's clear where they belong.

> +
> +The read-only sub-directory "info" in root directory has RDT related
> +system info.
> +
> +The "info" file under the info sub-directory shows general info of the system.
> +It shows shared domain and the resources within this domain.
> +
> +Each resource has its own info sub-directory. User can read the information
> +for allocation. For example, l3 directory has max_closid, max_cbm_len,
> +domain_to_cache_id.

Can you please restructure this so it's more obvious what you want to explain.

The "info" directory contains read-only system information:

3.1.1 info

The read-only file 'info' contains general information about the resource
control facility:

- Shared domains and the resources associated with those domains

3.1.2 resources

Each resource has its own sub-directory, which contains resource-specific
information.

3.1.2.1 L3 specific files

- max_closid: The maximum number of available closids
(explain closid ....)
- max_cbm_len: ....
- domain_to_cache_id: ....

So when you add L2 then you can add a proper description of the L2 related
files.
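
While at it, a short read example makes the purpose obvious. Sketch, using
the l3 file names from above:

  # query the L3 allocation parameters
  cat /sys/fs/rscctrl/info/l3/max_closid
  cat /sys/fs/rscctrl/info/l3/max_cbm_len
  cat /sys/fs/rscctrl/info/l3/domain_to_cache_id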

> +3.2. tasks
> +----------
> +
> +The file "tasks" has all task ids in the root directory initially.

This does not make sense.

The tasks file contains all thread ids which are associated with the root
resource partition. Initially all threads are associated with it.

Threads can be moved to other tasks files in resource partitions. A thread
can only be associated with a single resource partition.

> +thread ids in the file will be added or removed among sub-directories or
> +partitions. A task id only stays in one directory at the same time.

Is a task required to be associated with at least one 'tasks' file?

> +3.3. cpus
> +
> +The file "cpus" has a cpu mask that specifies the CPUs that are bound to the
> +schemas.

Please explain the concept of schemata (I prefer schemata as plural of schema,
but that's just my preference) before explaining what the cpumask in this file
means.

> +Any tasks scheduled on the cpus will use the schemas. User can set
> +both "cpus" and "tasks" to share the same schema in one directory. But when
> +a CPU is bound to a schema, a task running on the CPU uses this schema and
> +kernel will ignore scheam set up for the task in "tasks".

This does not make any sense.

When a task is bound to a schema then this should take precedence over the
schema which is associated with the CPU. The CPU association is meant for
tasks which are not bound to a particular partition/schema.

So the initial setup should be:

- All CPUs are associated with the root resource partition

- No thread is associated with a particular resource partition

When a thread is added to the 'tasks' file of a partition then this partition
takes precedence. If it's removed, i.e. the association with a partition is
undone, then the CPU association is used.

I have no idea why you think that all threads should be in a tasks file by
default. Associating CPUs in the first place makes a lot more sense as it
represents the topology of the system nicely.
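
IOW, the initial state should look like this. Sketch; the comments describe
what I'd expect, not what your patch currently produces:

  # all CPUs belong to the root partition, no thread is bound anywhere
  cat /sys/fs/rscctrl/cpus      # full cpumask of the system
  cat /sys/fs/rscctrl/tasks     # empty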

> +Initial value is all zeros which means there is no CPU bound to the schemas
> +in the root directory and tasks use the schemas.

As I said above this is backwards.

> +3.4. schemas
> +------------
> +
> +The file "schemas" has default allocation masks/values for all resources on
> +each socket/cpu. Format of the file "schemas" is in multiple lines and each
> +line represents masks or values for one resource.

You really want to explain that the 'tasks', 'cpus' and 'schemata' files are
available on all levels of the resource hierarchy. The special case of the
files in the root partition is that their default values are set when the
facility is initialized.

> +Format of one resource schema line is as follows:
> +
> +<resource name>:<resource id0>=<schema>;<resource id1>=<schema>;...

> +As one example, CAT L3's schema format is:

That's crap. You want a proper sub-point explaining the L3 schema format and
not 'one example'.

3.4.1 L3 schema

L3 resource ids are the L3 domains, which are currently per socket.

The format for CBM only partitioning is:

L3:<cache_id0>=<cbm>;<cache_id1>=<cbm>;...

<cbm> is the cache allocation bitmask in hex

Example:

L3:0=ff;1=c0;

Explanation of example ....

For CBM and CDP partitioning the format is:

L3:<cache_id0>=<d_cbm>,<i_cbm>;<cache_id1>=<d_cbm>,<i_cbm>;...

Example:
....
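
And a usage sketch for the CBM only format above; the partition name 'p1'
is made up:

  # give partition p1 CBM 0xff on cache id 0 and 0xc0 on cache id 1
  echo "L3:0=ff;1=c0" > /sys/fs/rscctrl/p1/schemas
  cat /sys/fs/rscctrl/p1/schemas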

> +If one resource is disabled, its line is not shown in schemas file.

That means:

Resources which are not described in a schemata file are disabled for
that particular partition.

Right?

Now that raises the question how this is supposed to work. Let's assume that
we have a partition 'foo' and thread X is in the tasks file of that
partition. The schema of that partition contains only an L2 entry. What's the
L3 association for thread X? Nothing at all?

> +The schema line can be expended for situations. L3 cbms format can be

You probably wanted to say extended, right?

> +4. Create and remove sub-directory
> +===================================

What is the meaning of a 'sub-directory'? I assume it's a resource
partition, so this chapter should be named accordingly. The fact that the
partition is based on a directory is just an implementation detail.

> +User can create a sub-directory under the root directory by "mkdir" command.
> +User can remove the sub-directory by "rmdir" command.

User? Any user?

> +
> +Each sub-directory represents a resource allocation policy that user can
> +allocate resources for tasks or cpus.
> +
> +Each directory has three files "tasks", "cpus", and "schemas". The meaning
> +of each file is same as the files in the root directory.
> +
> +When a directory is created, initial contents of the files are:
> +
> +tasks: Empty. This means no task currently uses this allocation schemas.
> +cpus: All zeros. This means no CPU uses this allocation schemas.
> +schemas: All ones. This means all resources can be used in this allocation.
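
A complete example would make this section way more accessible. Sketch; the
partition name is made up:

  # create a new resource partition and inspect it
  mkdir /sys/fs/rscctrl/p1
  ls /sys/fs/rscctrl/p1
  # remove it again
  rmdir /sys/fs/rscctrl/p1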

> +5. Add/remove a task in a partition
> +===================================
> +
> +User can add/remove a task by writing its PID in "tasks" in a partition.
> +User can read PIDs stored in one "tasks" file.
> +
> +One task PID only exists in one partition/directory at the same time. If PID
> +is written in a new directory, it's removed automatically from its last
> +directory.

Please use partition consistently. Aside from that, this belongs to the
description of the 'tasks' file.
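
And again, a short example helps. Sketch; pid and partition name are made
up:

  # bind thread 1234 to partition p1; it is removed from its previous
  # partition automatically
  echo 1234 > /sys/fs/rscctrl/p1/tasks
  cat /sys/fs/rscctrl/p1/tasks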

> +
> +6. Add/remove a CPU in a partition
> +==================================
> +
> +User can add/remove a CPU by writing its bit in "cpus" in a partition.
> +User can read CPUs stored in one "cpus" file.

Any (l)user?

> +One CPU only exists in one partition/directory if user wants it to be bound
> +to any "schemas". Kernel guarantees uniqueness of the CPU in the whole
> +directory to make sure it only uses one schemas. If a CPU is written in one

^^^^^^^^^
You mean hierarchy here, right?

> +new directory, it's automatically removed from its original directory if it
> +exists in the original directory.

Please use partition, not directory.
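
Same here, a short example would help. Sketch; I assume the file takes the
usual hex cpumask format:

  # bind CPUs 0-3 to partition p1; they are removed from any other
  # partition's cpus file automatically
  echo 0000000f > /sys/fs/rscctrl/p1/cpus
  cat /sys/fs/rscctrl/p1/cpus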

Thanks,

tglx