Re: [PATCH 4/6] sched/fair: Skip prefer sibling move between SMT group and non-SMT group

From: Tim Chen
Date: Tue May 09 2023 - 19:36:01 EST


On Tue, 2023-05-09 at 15:36 +0200, Vincent Guittot wrote:
> On Thu, 4 May 2023 at 18:11, Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx> wrote:
> >
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 8a325db34b02..58ef7d529731 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -10411,8 +10411,12 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
> > /*
> > * Try to move all excess tasks to a sibling domain of the busiest
> > * group's child domain.
> > + *
> > + * Do not try to move between non smt sched group and smt sched
> > + * group. Let asym active balance properly handle that case.
> > */
> > if (sds.prefer_sibling && local->group_type == group_has_spare &&
> > + !asymmetric_groups(sds.busiest, sds.local) &&
>
> Can't you delete SD_PREFER_SIBLING flags when building topology like
> SD_ASYM_CPUCAPACITY does ?

The sched domain actually can have a mixture of sched groups with Atom
modules and sched groups with SMT cores. When comparing one Atom core
cluster sched group with another Atom core cluster, or one SMT core with
another SMT core, I think we do want the prefer-sibling logic. It is only
when comparing an SMT core with an Atom core cluster that we want to skip
it. Ricardo, please correct me if I am wrong.
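
Roughly, the asymmetric_groups() check used in the hunk above boils down
to comparing the SMT-ness of the two groups (sketch; the exact body is in
the full patch, keying off SD_SHARE_CPUCAPACITY since only SMT groups
carry that flag):

static inline bool asymmetric_groups(struct sched_group *sg1,
				     struct sched_group *sg2)
{
	if (!sg1 || !sg2)
		return false;

	/* SMT groups set SD_SHARE_CPUCAPACITY; non-SMT groups do not. */
	return (sg1->flags & SD_SHARE_CPUCAPACITY) !=
	       (sg2->flags & SD_SHARE_CPUCAPACITY);
}

So within a domain mixing Atom clusters and SMT cores, the check only
fires when the busiest and local groups differ in SMT-ness, and the
prefer-sibling path is preserved for the like-for-like comparisons.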

>
> Generally speaking SD_ASYM_CPUCAPACITY and SD_ASYM_PACKING are doing
> quite similar thing, it would be good to get one common solution
> instead of 2 parallel paths

Okay. I'll see what I can do to merge the handling.
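
For context, SD_ASYM_CPUCAPACITY already does that kind of flag surgery
at topology build time; the existing pattern in sd_init() in
kernel/sched/topology.c looks roughly like this (sketch; exact form
depends on kernel version):

	/*
	 * Don't attempt to spread across CPUs of different capacities:
	 * an SD_ASYM_CPUCAPACITY domain drops SD_PREFER_SIBLING from
	 * its child domain when the domain hierarchy is built.
	 */
	if ((sd->flags & SD_ASYM_CPUCAPACITY) && sd->child)
		sd->child->flags &= ~SD_PREFER_SIBLING;

The complication for the SD_ASYM_PACKING case here is the mixed-group
situation above: dropping SD_PREFER_SIBLING wholesale would also lose it
for the Atom-vs-Atom and SMT-vs-SMT comparisons where we still want it.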

Tim


>
> > busiest->sum_nr_running > local->sum_nr_running + 1)
> > goto force_balance;
> >
> > --
> > 2.32.0
> >