Re: [PATCH] sched: avoid scale real weight down to zero

From: Michael Wang
Date: Mon Mar 16 2020 - 02:33:42 EST


Hi, Peter

I've done more complicated testing with shares of 2 and everything looks fine.

Should we apply this, or is there any concern?

Regards,
Michael Wang

On 2020/3/5 10:57 PM, Michael Wang wrote:
> During our testing, we found a case where shares no longer work
> correctly; the cgroup topology is:
>
> /sys/fs/cgroup/cpu/A (shares=102400)
> /sys/fs/cgroup/cpu/A/B (shares=2)
> /sys/fs/cgroup/cpu/A/B/C (shares=1024)
>
> /sys/fs/cgroup/cpu/D (shares=1024)
> /sys/fs/cgroup/cpu/D/E (shares=1024)
> /sys/fs/cgroup/cpu/D/E/F (shares=1024)
>
> The same benchmark runs in groups C and F, no other tasks are
> running, and the benchmark is capable of consuming all the CPUs.
>
> We expected group C to win more CPU resources, since it can enjoy
> all the shares of group A, but it is F that wins much more.
>
> The reason is that group B has shares of 2; since
> A->cfs_rq.load.weight == B->se.load.weight == B->shares/nr_cpus,
> A->cfs_rq.load.weight becomes very small.
>
> And in calc_group_shares() we calculate shares as:
>
> load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
> shares = (tg_shares * load) / tg_weight;
>
> Since 'cfs_rq->load.weight' is too small, the load becomes 0 after
> the scale down; although 'tg_shares' is 102400, the result is then
> clamped up to MIN_SHARES, so the shares of the se which stands for
> group A on the root cfs_rq become 2.
>
> Meanwhile, the weight of the se of D on the root cfs_rq is far
> bigger than 2, so it wins the battle.
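>
> A standalone sketch of the arithmetic (userspace C, not kernel code;
> the 96-CPU count is an assumption for illustration, and
> SCHED_FIXEDPOINT_SHIFT is 10 on 64-bit):
>
> 	#include <stdio.h>
>
> 	#define SCHED_FIXEDPOINT_SHIFT	10
>
> 	int main(void)
> 	{
> 		unsigned long nr_cpus = 96;	/* assumed CPU count */
> 		/* B->shares as stored: scale_load(2) == 2 << 10 == 2048 */
> 		unsigned long b_shares = 2UL << SCHED_FIXEDPOINT_SHIFT;
> 		/* ~B->se.load.weight on one CPU: 2048 / 96 == 21 */
> 		unsigned long se_weight = b_shares / nr_cpus;
>
> 		/* the plain shift truncates this small weight to 0 */
> 		printf("weight %lu scales down to %lu\n",
> 		       se_weight, se_weight >> SCHED_FIXEDPOINT_SHIFT);
> 		return 0;
> 	}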
>
> Thus when scale_load_down() scales the real weight down to 0, it no
> longer tells the real story; the caller gets the wrong information
> and the calculation becomes buggy.
>
> This patch adds a check in scale_load_down() so the real weight is
> >= MIN_SHARES after scaling; with the patch applied, group C wins as
> expected.
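>
> As a minimal userspace rendering of the patched macro on the numbers
> from the sketch above (MIN_SHARES is 2; the max() define here is a
> stand-in for the kernel helper):
>
> 	#define SCHED_FIXEDPOINT_SHIFT	10
> 	#define MIN_SHARES		2UL
> 	#define max(a, b)		((a) > (b) ? (a) : (b))
>
> 	unsigned long scale_load_down_patched(unsigned long w)
> 	{
> 		unsigned long __w = w;
>
> 		if (__w)
> 			__w = max(MIN_SHARES, __w >> SCHED_FIXEDPOINT_SHIFT);
> 		return __w;	/* scale_load_down_patched(21) == 2, not 0 */
> 	}
>
> The load seen by calc_group_shares() is therefore never truncated to
> 0, so the 102400 shares of group A can take effect.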
>
> Cc: Ben Segall <bsegall@xxxxxxxxxx>
> Cc: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
> Suggested-by: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Signed-off-by: Michael Wang <yun.wang@xxxxxxxxxxxxxxxxx>
> ---
> kernel/sched/sched.h | 8 +++++++-
> 1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 2a0caf394dd4..75c283f22256 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -118,7 +118,13 @@ extern long calc_load_fold_active(struct rq *this_rq, long adjust);
>  #ifdef CONFIG_64BIT
>  # define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
>  # define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
> -# define scale_load_down(w)	((w) >> SCHED_FIXEDPOINT_SHIFT)
> +# define scale_load_down(w) \
> +({ \
> +	unsigned long __w = (w); \
> +	if (__w) \
> +		__w = max(MIN_SHARES, __w >> SCHED_FIXEDPOINT_SHIFT); \
> +	__w; \
> +})
>  #else
>  # define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
>  # define scale_load(w)		(w)
>