Re: [PATCH v4 2/3] sched/fair: Clean up active balance nr_balance_failed trickery

From: Vincent Guittot
Date: Fri Apr 02 2021 - 08:57:37 EST


On Thu, 1 Apr 2021 at 21:30, Valentin Schneider
<valentin.schneider@xxxxxxx> wrote:
>
> When triggering an active load balance, sd->nr_balance_failed is set to
> such a value that any further can_migrate_task() using said sd will ignore
> the output of task_hot().
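>
> For reference, the trickery boils down to the write removed below paired
> with the early bail-out in can_migrate_task(); roughly (a sketch of the
> mainline logic, not a verbatim excerpt):
>
>	/* load_balance(): force subsequent migrations */
>	sd->nr_balance_failed = sd->cache_nice_tries + 1;
>
>	/* can_migrate_task(): the check the write is meant to defeat */
>	if (tsk_cache_hot <= 0 ||
>	    env->sd->nr_balance_failed > env->sd->cache_nice_tries)
>		return 1;	/* migrate even if task_hot() said no */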
>
> This behaviour makes sense, as active load balance intentionally preempts a
> rq's running task to migrate it right away. However, this asynchronous write
> is a bit shoddy: the stopper thread might run active_load_balance_cpu_stop
> before the sd->nr_balance_failed write either becomes visible to the
> stopper's CPU or even happens on the CPU that appended the stopper work.
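>
> Concretely, the interleaving being fixed looks something like this
> (an illustrative timeline, assuming the stopper wins the race):
>
>	CPU A: load_balance()              CPU B: busiest's CPU
>	stop_one_cpu_nowait(...)
>	                                   active_load_balance_cpu_stop()
>	                                     can_migrate_task()
>	                                       /* sees stale nr_balance_failed */
>	sd->nr_balance_failed = sd->cache_nice_tries + 1;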
>
> Add a struct lb_env flag to denote active balancing, and use it in
> can_migrate_task(). Remove the sd->nr_balance_failed write that served the
> same purpose. Clean up the LBF_DST_PINNED active balance special case.
>
> Signed-off-by: Valentin Schneider <valentin.schneider@xxxxxxx>

Reviewed-by: Vincent Guittot <vincent.guittot@xxxxxxxxxx>

> ---
>  kernel/sched/fair.c | 31 +++++++++++++++----------------
>  1 file changed, 15 insertions(+), 16 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 04d5e14fa261..d8077f82a380 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7422,6 +7422,7 @@ enum migration_type {
>  #define LBF_NEED_BREAK	0x02
>  #define LBF_DST_PINNED	0x04
>  #define LBF_SOME_PINNED	0x08
> +#define LBF_ACTIVE_LB	0x10
>
>  struct lb_env {
>  	struct sched_domain	*sd;
> @@ -7583,10 +7584,13 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
>  		 * our sched_group. We may want to revisit it if we couldn't
>  		 * meet load balance goals by pulling other tasks on src_cpu.
>  		 *
> -		 * Avoid computing new_dst_cpu for NEWLY_IDLE or if we have
> -		 * already computed one in current iteration.
> +		 * Avoid computing new_dst_cpu
> +		 * - for NEWLY_IDLE
> +		 * - if we have already computed one in current iteration
> +		 * - if it's an active balance
>  		 */
> -		if (env->idle == CPU_NEWLY_IDLE || (env->flags & LBF_DST_PINNED))
> +		if (env->idle == CPU_NEWLY_IDLE ||
> +		    env->flags & (LBF_DST_PINNED | LBF_ACTIVE_LB))
>  			return 0;
>
>  		/* Prevent to re-select dst_cpu via env's CPUs: */
> @@ -7611,10 +7615,14 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
>
>  	/*
>  	 * Aggressive migration if:
> -	 * 1) destination numa is preferred
> -	 * 2) task is cache cold, or
> -	 * 3) too many balance attempts have failed.
> +	 * 1) active balance
> +	 * 2) destination numa is preferred
> +	 * 3) task is cache cold, or
> +	 * 4) too many balance attempts have failed.
>  	 */
> +	if (env->flags & LBF_ACTIVE_LB)
> +		return 1;
> +
>  	tsk_cache_hot = migrate_degrades_locality(p, env);
>  	if (tsk_cache_hot == -1)
>  		tsk_cache_hot = task_hot(p, env);
> @@ -9805,9 +9813,6 @@ static int load_balance(int this_cpu, struct rq *this_rq,
>  					active_load_balance_cpu_stop, busiest,
>  					&busiest->active_balance_work);
>  			}
> -
> -			/* We've kicked active balancing, force task migration. */
> -			sd->nr_balance_failed = sd->cache_nice_tries+1;
>  		}
>  	} else {
>  		sd->nr_balance_failed = 0;
> @@ -9957,13 +9962,7 @@ static int active_load_balance_cpu_stop(void *data)
>  			.src_cpu	= busiest_rq->cpu,
>  			.src_rq		= busiest_rq,
>  			.idle		= CPU_IDLE,
> -			/*
> -			 * can_migrate_task() doesn't need to compute new_dst_cpu
> -			 * for active balancing. Since we have CPU_IDLE, but no
> -			 * @dst_grpmask we need to make that test go away with lying
> -			 * about DST_PINNED.
> -			 */
> -			.flags		= LBF_DST_PINNED,
> +			.flags		= LBF_ACTIVE_LB,
>  		};
>
>  		schedstat_inc(sd->alb_count);
> --
> 2.25.1
>