Re: [PATCH 4/6] sched/fair: Skip prefer sibling move between SMT group and non-SMT group

From: Peter Zijlstra
Date: Fri May 05 2023 - 19:39:40 EST


On Fri, May 05, 2023 at 04:07:39PM -0700, Tim Chen wrote:
> On Fri, 2023-05-05 at 15:22 +0200, Peter Zijlstra wrote:
> > On Thu, May 04, 2023 at 09:09:54AM -0700, Tim Chen wrote:
> > > From: Tim C Chen <tim.c.chen@xxxxxxxxxxxxxxx>
> > >
> > > Do not try to move tasks between a non-SMT sched group and an SMT
> > > sched group for "prefer sibling" load balancing.
> > > Let asym_active_balance_busiest() handle that case properly.
> > > Otherwise we could get tasks bouncing back and forth between
> > > the SMT sched group and the non-SMT sched group.
> > >
> > > Reviewed-by: Ricardo Neri <ricardo.neri-calderon@xxxxxxxxxxxxxxx>
> > > Signed-off-by: Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>
> > > ---
> > > kernel/sched/fair.c | 4 ++++
> > > 1 file changed, 4 insertions(+)
> > >
> > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > index 8a325db34b02..58ef7d529731 100644
> > > --- a/kernel/sched/fair.c
> > > +++ b/kernel/sched/fair.c
> > > @@ -10411,8 +10411,12 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
> > > /*
> > > * Try to move all excess tasks to a sibling domain of the busiest
> > > * group's child domain.
> > > + *
> > > + * Do not try to move between non smt sched group and smt sched
> > > + * group. Let asym active balance properly handle that case.
> > > */
> > > if (sds.prefer_sibling && local->group_type == group_has_spare &&
> > > + !asymmetric_groups(sds.busiest, sds.local) &&
> > > busiest->sum_nr_running > local->sum_nr_running + 1)
> > > goto force_balance;
> >
> > This seems to have the hidden assumption that a !SMT core is somehow
> > 'less' than an SMT core. Should this not also look at
> > sched_asym_prefer() to establish this is so?
> >
> > I mean, imagine I have a regular system and just offline one smt sibling
> > for giggles.
>
> I don't quite follow your point, as asymmetric_groups() returns false even
> when one SMT sibling is offlined.
>
> Even if, say, sds.busiest has 1 SMT and sds.local has 2 SMT, both sched groups still
> have the SD_SHARE_CPUCAPACITY flag turned on. So asymmetric_groups() returns
> false and the load balancing logic is not changed for a regular non-hybrid system.
>
> I may be missing something.

What's the difference between the two cases? That is, if the remaining
sibling will have SD_SHARE_CPUCAPACITY from the degenerate SMT domain
that's been reaped, then why doesn't the same thing apply to the atoms
in the hybrid muck?

Those two cases *should* be identical; in both cases you have cores with
and cores without SMT.
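
For context, a minimal sketch of the asymmetric_groups() helper under
discussion, inferred from the SD_SHARE_CPUCAPACITY reasoning above (the
actual definition comes from an earlier patch in this series and may
differ):

	static inline bool asymmetric_groups(struct sched_group *sg1,
					     struct sched_group *sg2)
	{
		if (!sg1 || !sg2)
			return false;

		/* One group shares CPU capacity (SMT) and the other does not. */
		return (sg1->flags & SD_SHARE_CPUCAPACITY) !=
		       (sg2->flags & SD_SHARE_CPUCAPACITY);
	}

With a definition along these lines, offlining one SMT sibling on a
regular system leaves both groups with SD_SHARE_CPUCAPACITY set, so the
helper returns false and the prefer-sibling path is unchanged -- which is
the point Tim makes above. Peter's question is whether the flag survives
once the degenerate SMT domain has been reaped, and if so why the same
reasoning would not also apply to the non-SMT cores on hybrid systems.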