Re: [PATCH v2] sched/fair: Use all little CPUs for CPU-bound workload

From: Vincent Guittot
Date: Fri Nov 24 2023 - 10:56:23 EST


On Fri, 24 Nov 2023 at 16:33, Pierre Gondois <pierre.gondois@xxxxxxx> wrote:
>
> Running n CPU-bound tasks on an n-CPU platform:
> - with asymmetric CPU capacity
> - not having the SD_SHARE_PKG_RESOURCES flag set at the DIE
> sched-domain level (i.e. not DynamIQ systems)

Nit: SD_SHARE_PKG_RESOURCES is never set at the DIE level. In the case
of DynamIQ systems, all CPUs are in the same MC level, which has the
SD_SHARE_PKG_RESOURCES flag set.

> might result in a task placement where two tasks run on a big CPU
> and none on a little CPU. This placement could be more optimal by
> using all CPUs.
>
> Testing platform:
> Juno-r2:
> - 2 big CPUs (1-2), maximum capacity of 1024
> - 4 little CPUs (0,3-5), maximum capacity of 383
>
> Testing workload ([1]):
> Spawn 6 CPU-bound tasks. During the first 100ms (step 1), each task
> is affined to a CPU, except for:
> - one little CPU, which is left idle.
> - one big CPU, which has 2 tasks affined to it.
> After the 100ms (step 2), remove the cpumask affinities.
>
> Before patch:
> During step 2, the load balancer running from the idle CPU tags sched
> domains as:
> - little CPUs: 'group_has_spare'. Indeed, 3 CPU-bound tasks run on a
> 4-CPU sched domain, and the idle CPU provides enough spare
> capacity.
> - big CPUs: 'group_overloaded'. Indeed, 3 tasks run on a 2-CPU
> sched domain, so the early exit in the following path is not taken
> (sum_nr_running == 3 > group_weight == 2) and the group ends up
> classified as overloaded:
> group_is_overloaded()
> \-if (sgs->sum_nr_running <= sgs->group_weight) return false;
>
> The following path, which would change the migration type to
> 'migrate_task', is not taken:
> calculate_imbalance()
> \-if (env->idle != CPU_NOT_IDLE && env->imbalance == 0)
> as the local group has some spare capacity, the imbalance
> is not 0.
>
> The migration type requested is 'migrate_util' and the busiest
> runqueue is the big CPU's runqueue, which has 2 tasks (each with a
> utilization of 512). The idle little CPU cannot pull one of these
> tasks as its capacity is too small for the task. The following path
> is used:
> detach_tasks()
> \-case migrate_util:
> \-if (util > env->imbalance) goto next;
>
> After patch:
> As the number of failed balancing attempts grows (with
> 'nr_balance_failed'), progressively make it easier to migrate
> a big task to the idling little CPU. A similar mechanism is
> used for the 'migrate_load' migration type.
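
For readers following along: here is a minimal userspace sketch of the
effect of the one-line change below. It is an illustration only: the
real shr_bound() is a macro in kernel/sched/sched.h, and the imbalance
value of 383 is merely assumed here to stand for the little CPU's spare
capacity.

	#include <stdio.h>

	/*
	 * Userspace approximation of the kernel's shr_bound(): shift right,
	 * clamping the shift below the width of the type to avoid undefined
	 * behaviour. Illustration only, not the kernel implementation.
	 */
	static unsigned long shr_bound(unsigned long val, unsigned int shift)
	{
		unsigned int max_shift = 8 * sizeof(val) - 1;

		return val >> (shift < max_shift ? shift : max_shift);
	}

	int main(void)
	{
		unsigned long util = 512;      /* utilization of one big-CPU task */
		unsigned long imbalance = 383; /* assumed: the little CPU's spare capacity */
		unsigned int failed;

		for (failed = 0; failed <= 2; failed++)
			printf("nr_balance_failed=%u: effective util=%lu -> %s\n",
			       failed, shr_bound(util, failed),
			       shr_bound(util, failed) > imbalance ?
			       "goto next (skip)" : "detach task");

		return 0;
	}

With nr_balance_failed == 0, the 512-utilization task is still skipped,
exactly as before the patch; after one failed attempt the shifted value
(256) fits within the imbalance and the task can be pulled to the
little CPU.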
>
> Improvement:
> Running the testing workload [1] with step 2 representing
> a ~10s load for a big CPU:
> Before patch: ~19.3s
> After patch: ~18s (-6.7%)
>
> Similar issue reported at:
> https://lore.kernel.org/lkml/20230716014125.139577-1-qyousef@xxxxxxxxxxx/
>
> v1:
> https://lore.kernel.org/all/20231110125902.2152380-1-pierre.gondois@xxxxxxx/
>
> Suggested-by: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
> Signed-off-by: Pierre Gondois <pierre.gondois@xxxxxxx>

Reviewed-by: Vincent Guittot <vincent.guittot@xxxxxxxxxx>

> ---
> kernel/sched/fair.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index df348aa55d3c..53c18fd23ae7 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8907,7 +8907,7 @@ static int detach_tasks(struct lb_env *env)
> case migrate_util:
> util = task_util_est(p);
>
> - if (util > env->imbalance)
> + if (shr_bound(util, env->sd->nr_balance_failed) > env->imbalance)
> goto next;
>
> env->imbalance -= util;
> --
> 2.25.1
>