Re: [PATCH 4/4] sched/numa: Do not move imbalanced load purely on the basis of an idle CPU

From: Mel Gorman
Date: Mon Sep 10 2018 - 05:41:53 EST


On Fri, Sep 07, 2018 at 01:37:39PM +0100, Mel Gorman wrote:
> > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > index d59d3e00a480..d4c289c11012 100644
> > > --- a/kernel/sched/fair.c
> > > +++ b/kernel/sched/fair.c
> > > @@ -1560,7 +1560,7 @@ static bool task_numa_compare(struct task_numa_env *env,
> > >  			goto unlock;
> > >
> > >  	if (!cur) {
> > > -		if (maymove || imp > env->best_imp)
> > > +		if (maymove)
> > >  			goto assign;
> > >  		else
> > >  			goto unlock;
> >
> > Srikar's patch here:
> >
> > http://lkml.kernel.org/r/1533276841-16341-4-git-send-email-srikar@xxxxxxxxxxxxxxxxxx
> >
> > Also frobs this condition, but in a less radical way. Does that yield
> > similar results?
>
> I can check. I do wonder, of course, whether the less radical approach just
> means that automatic NUMA balancing and the load balancer simply disagree
> about placement at a different time. It'll take a few days to have an
> answer, as the battery of workloads to check this takes ages.
>

Tests completed over the weekend and I've found that the performance of
the two patches is very similar on two machines (both 2-socket) running a
variety of workloads. Hence, I'm not worried about which patch gets picked
up. However, I would prefer my own on the grounds that the additional
complexity does not appear to gain us anything. Of course, that changes if
Srikar's tests on his larger ppc64 machines show the more complex approach
is justified.

--
Mel Gorman
SUSE Labs