Re: [PATCH 3/3] sched: Update ->next_balance correctly during newidle balance

From: Joel Fernandes
Date: Thu Nov 09 2023 - 07:31:52 EST


On Thu, Nov 9, 2023 at 5:02 AM Joel Fernandes <joel@xxxxxxxxxxxxxxxxx> wrote:
[...]
> > > things worse for power on ARM, where you have uclamp stuff happening in the
> > > load balance paths, which was quite heavy the last time I traced it.
> > >
> > > Further, we have observed in our tracing on a real device that the update
> > > of rq->next_balance from the newidle path is itself buggy... because newidle
> > > balance may not update sd->last_balance, it is possible that rq->next_balance,
> > > when updated by update_next_balance(), gets set to a value in the past and
> > > stays stuck there for a long time! Perhaps we should investigate more and fix
> > > that bug separately. Vineeth could provide more details on the "getting stuck
> > > in the past" behavior as well.
> >
> > sd->last_balance reflects the last time an idle/busy load_balance happened
> > (newly idle is out of scope for the points I mentioned previously). So if no
> > load balance happens for a while, rq->next_balance can be in the past, but I
> > don't see a problem here. It just means that a load balance hasn't happened
> > for a while. It can even move backward if it was set while the CPU was busy
> > but the CPU is now idle.
>
> Sure, but I think it should be set at least get_sd_balance_interval() into the
> future. Like so (untested)? Let me know what you think, and thanks!

Btw, I also drew a graph showing the issue without the patch:
https://i.imgur.com/RgTr45l.png

Each "x" mark is run_rebalance_domains() running on a CPU. As can be
seen, there were some 10 occurrences in a span of 15ms in one
instance.
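
For completeness, here is why a stale rq->next_balance shows up as a burst of
run_rebalance_domains() calls: trigger_load_balance() runs from the scheduler
tick and raises SCHED_SOFTIRQ (whose handler is run_rebalance_domains())
whenever rq->next_balance is not in the future. rebalance_domains() does push
rq->next_balance forward again, but each pass through the newidle path can
drag it back into the past, so the softirq keeps firing tick after tick.
Roughly (paraphrased from trigger_load_balance() in kernel/sched/fair.c, with
the NULL-domain/cpu_active checks elided):

void trigger_load_balance(struct rq *rq)
{
	/* ... on_null_domain() / cpu_active() checks elided ... */

	/*
	 * With rq->next_balance stuck in the past, this is true on every
	 * tick, so run_rebalance_domains() keeps getting scheduled, which
	 * is the burst of "x" marks in the graph above.
	 */
	if (time_after_eq(jiffies, rq->next_balance))
		raise_softirq(SCHED_SOFTIRQ);

	nohz_balancer_kick(rq);
}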

Thanks,

- Joel


> ---8<-----------------------
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index a3318aeff9e8..0d6667d31c51 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -11314,6 +11314,30 @@ get_sd_balance_interval(struct sched_domain *sd, int cpu_busy)
>  	return interval;
>  }
>
> +/*
> + * Update the next balance from newidle balance.
> + * The update of next_balance from newidle balance tries to make sure that
> + * we don't trigger periodic balance too far in the future on a now-idle
> + * system. This is just like update_next_balance(), except that since
> + * sd->last_balance may not have been updated for a while, we are careful
> + * not to set next_balance in the past.
> + */
> +static inline void
> +update_next_balance_newidle(struct sched_domain *sd, unsigned long *next_balance)
> +{
> +	unsigned long interval, next;
> +
> +	/* used by new idle balance, so cpu_busy = 0 */
> +	interval = get_sd_balance_interval(sd, 0);
> +	next = sd->last_balance + interval;
> +
> +	next = max(next, jiffies + interval);
> +
> +	if (time_after(*next_balance, next)) {
> +		*next_balance = next;
> +	}
> +}
> +
>  static inline void
>  update_next_balance(struct sched_domain *sd, unsigned long *next_balance)
>  {
> @@ -12107,7 +12131,7 @@ static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
>  	    (sd && this_rq->avg_idle < sd->max_newidle_lb_cost)) {
>
>  		if (sd)
> -			update_next_balance(sd, &next_balance);
> +			update_next_balance_newidle(sd, &next_balance);
>  		rcu_read_unlock();
>
>  		goto out;
> @@ -12124,7 +12148,7 @@ static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
>  		int continue_balancing = 1;
>  		u64 domain_cost;
>
> -		update_next_balance(sd, &next_balance);
> +		update_next_balance_newidle(sd, &next_balance);
>
>  		if (this_rq->avg_idle < curr_cost + sd->max_newidle_lb_cost)
>  			break;