Re: [PATCH] sched/cfs: fix spurious active migration

From: Vincent Guittot
Date: Mon Dec 02 2019 - 10:36:56 EST


On Mon, 2 Dec 2019 at 16:13, Dietmar Eggemann <dietmar.eggemann@xxxxxxx> wrote:
>
> On 29/11/2019 15:04, Vincent Guittot wrote:
> > The load balance can fail to find a suitable task during the periodic check
> > because the imbalance is smaller than half of the load of the waiting
> > tasks. This results in an increase in the number of failed load balances,
> > which can end up starting an active migration. This active migration is
> > useless because the currently running task is not a better choice than the
> > waiting ones. In fact, the current task was probably not running but
> > waiting for the CPU during one of the previous attempts, and it had not
> > been selected then either.
> >
> > When load balance fails too many times to migrate a task, we should relax
> > the constraint on the maximum load of the tasks that can be migrated,
> > similarly to what is done with cache hotness.
> >
> > Before the rework, load balance used to set the imbalance to the average
> > load_per_task in order to mitigate such a situation. This increased the
> > likelihood of migrating a task, but also of selecting a larger task than
> > needed while more appropriate ones were in the list.
>
> Why not use '&& !env->sd->nr_balance_failed' then? Too aggressive? But

Yeah, I thought about and tried !env->sd->nr_balance_failed. In fact, I
tried both and haven't seen any visible difference in my tests.
Nevertheless, the !env->sd->nr_balance_failed condition doesn't give the
currently running task, which may have a suitable load, a chance to
become a waiting task during the next load balance and be selected
instead of a larger task, which would create another imbalance. Also,
cache_nice_tries grows with the domain span, so larger domains get more
attempts to select different rqs before migrating a large task. Then,
env->sd->nr_balance_failed <= env->sd->cache_nice_tries is the condition
used in can_migrate_task() to prevent a migration because of cache
hotness, so we stay aligned with that condition. On the other hand,
!env->sd->nr_balance_failed is used with LB_MIN, which is disabled by
default.

TBH, I don't have a strong opinion between !env->sd->nr_balance_failed
and env->sd->nr_balance_failed <= env->sd->cache_nice_tries.
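
To illustrate, here is a rough paraphrase of the two existing checks
being compared (simplified, not the exact fair.c code):

	/*
	 * In can_migrate_task(): a cache-hot task is only allowed to be
	 * migrated once the balance has failed more than cache_nice_tries
	 * times.
	 */
	if (tsk_cache_hot == 1 &&
	    env->sd->nr_balance_failed <= env->sd->cache_nice_tries)
		return 0;	/* keep the cache-hot task where it is */

	/*
	 * In detach_tasks(), migrate_load case: LB_MIN (disabled by
	 * default) skips very small tasks only while no balance attempt
	 * has failed yet.
	 */
	if (sched_feat(LB_MIN) &&
	    load < 16 && !env->sd->nr_balance_failed)
		goto next;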


> the average load_per_task was calculated at each load balance attempt,
> so it would have led to a migration at the first load balance.
>
> This would be in sync with the LB_MIN check in the same switch case
> (migrate_load). Although LB_MIN is false by default.
>
> > Signed-off-by: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
> > ---
> >
> > I haven't seen any noticeable performance changes on the benchmarks that I
> > usually run, but the problem can easily be highlighted with a simple test
> > with 9 always-running tasks on 8 cores.
> >
> > kernel/sched/fair.c | 9 ++++++++-
> > 1 file changed, 8 insertions(+), 1 deletion(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index e0d662a..d1b4fa7 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -7433,7 +7433,14 @@ static int detach_tasks(struct lb_env *env)
> >  			    load < 16 && !env->sd->nr_balance_failed)
> >  				goto next;
> >
> > -			if (load/2 > env->imbalance)
> > +			/*
> > +			 * Make sure that we don't migrate too much load.
> > +			 * Nevertheless, relax the constraint if the
> > +			 * scheduler fails to find a good waiting task to
> > +			 * migrate.
> > +			 */
> > +			if (load/2 > env->imbalance &&
> > +			    env->sd->nr_balance_failed <= env->sd->cache_nice_tries)
> >  				goto next;
> >
> >  			env->imbalance -= load;
> >