Re: [PATCH 3/3] sched: Update ->next_balance correctly during newidle balance

From: Ingo Molnar
Date: Fri Oct 20 2023 - 03:53:59 EST

* Joel Fernandes (Google) <joel@xxxxxxxxxxxxxxxxx> wrote:

> From: "Vineeth Pillai (Google)" <vineeth@xxxxxxxxxxxxxxx>
>
> When newidle balancing triggers, we see that it constantly clobbers
> rq->next_balance even when no newidle balance actually happens because of
> the cost estimates. Due to this, periodic load balancing
> (rebalance_domains) may trigger far more often when the CPU is going in
> and out of idle at a high rate but is not really idle. Repeatedly
> triggering load balancing there is a bad idea, as it is a heavy
> operation. It also causes an increase in softirq activity.
>
> Another issue is that ->last_balance is not updated after a newidle
> balance, causing mistakes in the ->next_balance calculations.
>
> Fix this by updating last_balance only when a newidle load balance
> actually happens, and then updating next_balance. This is also how it is
> done in the other load balance paths. (A condensed sketch of the
> resulting bookkeeping follows the quoted diff below.)
>
> Testing shows a significant drop in softirqs when running:
> cyclictest -i 100 -d 100 --latency=1000 -D 5 -t -m -q
>
> The softirq count goes from ~6k to ~800.
>
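
[ Side note for anyone reproducing the numbers: the extra periodic
  balancing presumably shows up in the SCHED row of /proc/softirqs, so one
  way to get before/after counts is to sample that file around the
  cyclictest run. The snippet below is only an illustration of that
  sampling, not part of the patch; it prints the SCHED row, and diffing
  two samples gives the per-CPU increase. ]

/* Illustrative only: print the SCHED row of /proc/softirqs. */
#include <stdio.h>
#include <string.h>

int main(void)
{
        FILE *f = fopen("/proc/softirqs", "r");
        char line[1024];

        if (!f) {
                perror("/proc/softirqs");
                return 1;
        }
        while (fgets(line, sizeof(line), f)) {
                /* Rows look like "  SCHED:  <cpu0> <cpu1> ..." */
                if (strstr(line, "SCHED:"))
                        fputs(line, stdout);
        }
        fclose(f);
        return 0;
}
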
> Cc: Suleiman Souhlal <suleiman@xxxxxxxxxx>
> Cc: Steven Rostedt <rostedt@xxxxxxxxxxx>
> Cc: Frederic Weisbecker <frederic@xxxxxxxxxx>
> Cc: Paul E. McKenney <paulmck@xxxxxxxxxx>
> Signed-off-by: Vineeth Pillai (Google) <vineeth@xxxxxxxxxxxxxxx>
> Co-developed-by: Joel Fernandes (Google) <joel@xxxxxxxxxxxxxxxxx>
> Signed-off-by: Joel Fernandes (Google) <joel@xxxxxxxxxxxxxxxxx>
> ---
> kernel/sched/fair.c | 8 ++------
> 1 file changed, 2 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 8e276d12c3cb..b147ad09126a 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -12076,11 +12076,7 @@ static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
>
> if (!READ_ONCE(this_rq->rd->overload) ||
> (sd && this_rq->avg_idle < sd->max_newidle_lb_cost)) {
> -
> - if (sd)
> - update_next_balance(sd, &next_balance);
> rcu_read_unlock();
> -
> goto out;
> }
> rcu_read_unlock();
> @@ -12095,8 +12091,6 @@ static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
> int continue_balancing = 1;
> u64 domain_cost;
>
> - update_next_balance(sd, &next_balance);
> -
> if (this_rq->avg_idle < curr_cost + sd->max_newidle_lb_cost)
> break;
>
> @@ -12109,6 +12103,8 @@ static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
> t1 = sched_clock_cpu(this_cpu);
> domain_cost = t1 - t0;
> update_newidle_cost(sd, domain_cost);
> + sd->last_balance = jiffies;
> + update_next_balance(sd, &next_balance);
>
> curr_cost += domain_cost;
> t0 = t1;
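
For anyone skimming the thread, here is a condensed userspace-style sketch
of the bookkeeping the hunks above end up with. It is only an illustration
of the idea: dom_stub, run_balance() and now_jiffies are made-up stand-ins
for the real sched_domain machinery, and the values in main() are
arbitrary.

/*
 * Condensed, userspace-style sketch of the pattern above; dom_stub,
 * run_balance() and now_jiffies are made-up stand-ins, not kernel code.
 */
#include <stdint.h>
#include <stdio.h>

struct dom_stub {
        uint64_t last_balance;  /* jiffies of the last balance that really ran */
        uint64_t interval;      /* balance interval in jiffies */
        uint64_t max_lb_cost;   /* estimated cost of balancing this domain */
};

static uint64_t now_jiffies = 1000;

/* Pretend to balance one domain; return the cost it charged. */
static uint64_t run_balance(struct dom_stub *d)
{
        return d->max_lb_cost;
}

/*
 * Walk the domains, skipping any whose estimated cost exceeds the
 * remaining idle budget.  last_balance and the next_balance minimum are
 * only advanced for domains that were actually balanced, so skipped
 * passes no longer clobber the rq-wide next_balance target.
 */
static uint64_t newidle_pass(struct dom_stub *doms, int n, uint64_t avg_idle)
{
        uint64_t next_balance = UINT64_MAX, curr_cost = 0;
        int i;

        for (i = 0; i < n; i++) {
                struct dom_stub *d = &doms[i];

                /* Too expensive: skip it and do no bookkeeping at all. */
                if (avg_idle < curr_cost + d->max_lb_cost)
                        break;

                curr_cost += run_balance(d);
                d->last_balance = now_jiffies;  /* a balance really happened */
                if (d->last_balance + d->interval < next_balance)
                        next_balance = d->last_balance + d->interval;
        }
        return next_balance;    /* UINT64_MAX if nothing ran */
}

int main(void)
{
        struct dom_stub doms[] = {
                { .last_balance = 900, .interval = 32, .max_lb_cost = 50 },
                { .last_balance = 900, .interval = 64, .max_lb_cost = 500 },
        };

        /* Only the first domain fits in the idle budget of 100. */
        printf("next_balance = %llu\n",
               (unsigned long long)newidle_pass(doms, 2, 100));
        return 0;
}

The point of the sketch is that last_balance and the next_balance minimum
are only advanced for a domain after a balance pass actually ran there,
which is roughly what update_next_balance() computes from
sd->last_balance + the balance interval in fair.c.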

Okay, I'm applying patches #2 and #3, without #1: it should be safe
out of order, but let me know if I missed something ...

Thanks,

Ingo