Re: [PATCH 1/2] cgroup/cpuset: Keep current cpus list if cpus affinity was explicitly set

From: Waiman Long
Date: Thu Jul 28 2022 - 10:59:16 EST


On 7/28/22 10:44, Michal Koutný wrote:
> Hello.
>
> On Wed, Jul 27, 2022 at 08:58:14PM -0400, Waiman Long <longman@xxxxxxxxxx> wrote:
>> It was found that any change to the current cpuset hierarchy may reset
>> the cpus_allowed list of the tasks in the affected cpusets to the
>> default cpuset value even if those tasks have cpus affinity explicitly
>> set by the users before.
> I'm surprised this went so long unnoticed / unreported.
>
> Could it be users relied on that implicit affinity reset?

As said, it is more easily triggered in a cgroup v2 environment. On cgroup v2, systemd will write "+cpuset" to the root cgroup's subtree_control file when a new container is instantiated. I don't know why it does that, but it causes problems as it resets all the cpus_allowed list assignments. Cgroup v1 doesn't have this problem.
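
For reference, the reset is easy to observe from userspace. Something like the sketch below (assuming cgroup v2 is mounted at /sys/fs/cgroup and there is a CPU 1 to pin to) will show it on an unpatched kernel:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	cpu_set_t set;

	/* Explicitly pin this task to CPU 1 */
	CPU_ZERO(&set);
	CPU_SET(1, &set);
	if (sched_setaffinity(0, sizeof(set), &set)) {
		perror("sched_setaffinity");
		return 1;
	}

	/*
	 * While this loop runs, do in another shell:
	 *	echo +cpuset > /sys/fs/cgroup/cgroup.subtree_control
	 * Without the fix, the printed count jumps back up as the
	 * affinity is reset to the cpuset default.
	 */
	for (;;) {
		sleep(5);
		if (sched_getaffinity(0, sizeof(set), &set) == 0)
			printf("allowed cpus: %d\n", CPU_COUNT(&set));
	}
	return 0;
}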


>> That is especially easy to trigger under a cgroup v2 environment where
>> writing "+cpuset" to the root cgroup's cgroup.subtree_control file
>> will reset the cpus affinity of all the processes in the system.
> This should apply only to tasks that were extracted out of the root
> cgroup, no? (OK, those are all processes practically.)

The reset is done on all the cgroups in a particular subtree. In the case of the cgroup root, that means all the processes in the system.

> (Even without your second patch, the scope should be limited because of
> the src_cset == dst_cset check in cgroup_migrate_prepare_dst().)

>> That is especially problematic in a nohz_full environment where the
>> tasks running on the nohz_full CPUs usually have their cpus affinity
>> explicitly set and will behave incorrectly if their cpus affinity changes.
> One could also argue that for such processes, the cgroup hierarchy should
> be configured first, and only then should they start and set their own
> affinity.

>> Fix this problem by adding a flag in the task structure to indicate that
>> a task has their cpus affinity explicitly set before, and make the cpuset
>> code not change their cpus_allowed list unless the user-chosen cpu
>> list is no longer a subset of the cpus_allowed list of the cpuset itself.
> I'm uneasy with the occasional revert of this flag, i.e. a task that
> set its affinity would sometimes have it overwritten and sometimes
> not (which might have been relied on, especially with writes into
> cpuset.cpus).
> (But I have no better answer than the counter-argument above since
> there's no easier way to detect the implicit migrations.)
I also thought about that. Another possible alternative is to use the intersection of the cpuset's cpu list and the task's own cpu list as long as it is not empty. However, reducing the number of cpus assigned to a task may produce some unexpected behavior too.
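
Roughly, that intersection fallback would look something like the sketch below in the cpuset attach path (cpuset_update_task_cpumask() is just a made-up name for illustration, not code from the posted patch):

/*
 * Sketch of the intersection alternative: keep the task's own mask
 * where it still overlaps the cpuset's effective mask, and fall back
 * to the cpuset mask only when the intersection would be empty.
 */
static void cpuset_update_task_cpumask(struct task_struct *p,
				       const struct cpumask *cs_allowed)
{
	cpumask_var_t newmask;

	if (!alloc_cpumask_var(&newmask, GFP_KERNEL)) {
		set_cpus_allowed_ptr(p, cs_allowed);
		return;
	}

	/* cpumask_and() returns true iff the result is non-empty */
	if (cpumask_and(newmask, p->cpus_ptr, cs_allowed))
		set_cpus_allowed_ptr(p, newmask);
	else
		set_cpus_allowed_ptr(p, cs_allowed);

	free_cpumask_var(newmask);
}

Even then, a task could still see its mask shrink behind its back, which is the unexpected behavior I mean.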

> Also, is there a similar effect with memory binding?

I think so, but memory binding is less frequently used and its effect is less noticeable.
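
For example, a task that bound its allocations with set_mempolicy() (sketch below, assuming libnuma's <numaif.h> and that node 0 exists) could similarly have its policy rebound when the cpuset's mems list changes:

#include <numaif.h>

/* Bind all future allocations of this task to NUMA node 0 */
static long bind_to_node0(void)
{
	unsigned long nodemask = 1UL;	/* bit 0 = node 0 */

	return set_mempolicy(MPOL_BIND, &nodemask, sizeof(nodemask) * 8);
}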

Cheers,
Longman


> Thanks,
> Michal