Re: [External] Re: [PATCH 2/2] sched/core: Avoid double calling update_rq_clock()

From: Vincent Guittot
Date: Tue May 09 2023 - 12:26:57 EST


On Fri, 5 May 2023 at 14:13, Hao Jia <jiahao.os@xxxxxxxxxxxxx> wrote:
>
>
>
> On 2023/5/5 Vincent Guittot wrote:
>
> >>>> rq_attach_root+0xc4/0xd0
> >>>> cpu_attach_domain+0x3dc/0x7f0
> >>>> partition_sched_domains_locked+0x2a5/0x3c0
> >>>> rebuild_sched_domains_locked+0x477/0x830
> >>>> rebuild_sched_domains+0x1b/0x30
> >>>> cpuset_hotplug_workfn+0x2ca/0xc90
> >>>> ? balance_push+0x56/0xf0
> >>>> ? _raw_spin_unlock+0x15/0x30
> >>>> ? finish_task_switch+0x98/0x2f0
> >>>> ? __switch_to+0x291/0x410
> >>>> ? __schedule+0x65e/0x1310
> >>>> process_one_work+0x1bc/0x3d0
> >>>> worker_thread+0x4c/0x380
> >>>> ? preempt_count_add+0x92/0xa0
> >>>> ? rescuer_thread+0x310/0x310
> >>>> kthread+0xe6/0x110
> >>>> ? kthread_complete_and_exit+0x20/0x20
> >>>> ret_from_fork+0x1f/0x30
> >>>>
> >>>> For the __balance_push_cpu_stop() case, we remove update_rq_clock()
> >>>> from __migrate_task() to avoid updating the rq clock twice, and, to
> >>>> avoid a missed rq clock update, add an update_rq_clock() call before
> >>>> migration_cpu_stop() calls __migrate_task().
> >
> > Can we do the opposite?
> > AFAICT, the update_rq_clock() in __balance_push_cpu_stop() is only there
> > for __migrate_task(). I prefer to keep update_rq_clock() as close as
> > possible to its user.
>
> I'm afraid not; the rq clock also needs to be updated before
> select_fallback_rq() is called.

Yes, you're right.
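
For reference, here is roughly what the path looks like in my tree (a
paraphrased sketch of kernel/sched/core.c, so details may differ from
your version); the single update_rq_clock() indeed has to cover both
select_fallback_rq() and __migrate_task():

static int __balance_push_cpu_stop(void *arg)
{
        struct task_struct *p = arg;
        struct rq *rq = this_rq();
        struct rq_flags rf;
        int cpu;

        raw_spin_lock_irq(&p->pi_lock);
        rq_lock(rq, &rf);

        /* One update for everything done under this rq lock. */
        update_rq_clock(rq);

        if (task_rq(p) == rq && task_on_rq_queued(p)) {
                /* select_fallback_rq() needs a fresh rq clock ... */
                cpu = select_fallback_rq(rq->cpu, p);
                /* ... and so does __migrate_task()/move_queued_task(). */
                rq = __migrate_task(rq, &rf, p, cpu);
        }

        rq_unlock(rq, &rf);
        raw_spin_unlock_irq(&p->pi_lock);

        put_task_struct(p);

        return 0;
}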

>
>
> >
> >>>>
> >>>> This also applies to unthrottle_cfs_rq(), so we likewise remove
> >>>> update_rq_clock() from unthrottle_cfs_rq() to avoid the warnings
> >>>> caused by calling it multiple times from callers such as
> >>>> __cfsb_csd_unthrottle() and unthrottle_offline_cfs_rqs(), and
> >
> > This happens with the for loop added by
> > commit 8ad075c2eb1f ("sched: Async unthrottling for cfs bandwidth").
> >
>
> Yes, the warning is caused by this commit.
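
Right. For anyone following along, the loop added by that commit looks
roughly like this (paraphrased from kernel/sched/fair.c; details may
vary by version). Every iteration can end up in unthrottle_cfs_rq(),
which until now called update_rq_clock() each time under the same rq
lock:

static void __cfsb_csd_unthrottle(void *arg)
{
        struct cfs_rq *cursor, *tmp;
        struct rq *rq = arg;
        struct rq_flags rf;

        rq_lock(rq, &rf);

        rcu_read_lock();
        list_for_each_entry_safe(cursor, tmp, &rq->cfsb_csd_list,
                                 throttled_csd_list) {
                list_del_init(&cursor->throttled_csd_list);

                if (cfs_rq_throttled(cursor))
                        unthrottle_cfs_rq(cursor); /* clock update per entry */
        }
        rcu_read_unlock();

        rq_unlock(rq, &rf);
}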
>
> >>>> in order to avoid a missed rq clock update, we correspondingly add
> >>>> update_rq_clock() calls before unthrottle_cfs_rq() runs.
> >
> > These are special cases that happen because of the for_each loop.
> > As said above, I would prefer keeping update_rq_clock() close to its user.
> >
> > Could we use something similar to rq_clock_skip_update() for those lists?
> >
>
> I tried to do it with the method you suggested. Something like this, maybe?
>
> We also need to clear RQCF_ACT_SKIP (via rq_clock_cancel_loop_update())
> after the loop to avoid some warnings.
>
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index efdab1489113..f48b5d912d8c 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2420,7 +2420,6 @@ static struct rq *__migrate_task(struct rq *rq, struct rq_flags *rf,
>         if (!is_cpu_allowed(p, dest_cpu))
>                 return rq;
>
> -       update_rq_clock(rq);
>         rq = move_queued_task(rq, rf, p, dest_cpu);
>
>         return rq;
> @@ -2478,10 +2477,12 @@ static int migration_cpu_stop(void *data)
>                                 goto out;
>                 }
>
> -               if (task_on_rq_queued(p))
> +               if (task_on_rq_queued(p)) {
> +                       update_rq_clock(rq);
>                         rq = __migrate_task(rq, &rf, p, arg->dest_cpu);
> -               else
> +               } else {
>                         p->wake_cpu = arg->dest_cpu;
> +               }
>
>                 /*
>                  * XXX __migrate_task() can fail, at which point we might end
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 373ff5f55884..1dcef273bebe 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5575,6 +5575,13 @@ static void __cfsb_csd_unthrottle(void *arg)
>         struct rq_flags rf;
>
>         rq_lock(rq, &rf);
> +       /*
> +        * Iterating over the list can trigger several calls to update_rq_clock()
> +        * in unthrottle_cfs_rq().
> +        * Do it once and skip the potential next ones.
> +        */
> +       update_rq_clock(rq);
> +       rq_clock_loop_update(rq);
>
>         /*
>          * Since we hold rq lock we're safe from concurrent manipulation of
> @@ -5595,6 +5602,7 @@ static void __cfsb_csd_unthrottle(void *arg)
>
>         rcu_read_unlock();
>
> +       rq_clock_cancel_loop_update(rq);
>         rq_unlock(rq, &rf);
>  }
>
> @@ -6114,6 +6122,12 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
>         struct task_group *tg;
>
>         lockdep_assert_rq_held(rq);
> +       /*
> +        * The rq clock has already been updated before
> +        * set_rq_offline() runs, so we should skip updating
> +        * the rq clock again in unthrottle_cfs_rq().
> +        */
> +       rq_clock_loop_update(rq);
>
>         rcu_read_lock();
>         list_for_each_entry_rcu(tg, &task_groups, list) {
> @@ -6137,6 +6151,7 @@ static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
>                 unthrottle_cfs_rq(cfs_rq);
>         }
>         rcu_read_unlock();
> +       rq_clock_cancel_loop_update(rq);
>  }
>
> #else /* CONFIG_CFS_BANDWIDTH */
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index ec7b3e0a2b20..3d4981d354a9 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -1536,6 +1536,25 @@ static inline void rq_clock_skip_update(struct rq *rq)
>         rq->clock_update_flags |= RQCF_REQ_SKIP;
>  }
>
> +/*
> + * During cpu offlining and rq-wide unthrottling, we can trigger
> + * an update_rq_clock() for several cfs and rt runqueues (typically
> + * when using list_for_each_entry_*).
> + * rq_clock_loop_update() can be called after updating the clock once
> + * and before iterating over the list to prevent multiple updates.
> + */
> +static inline void rq_clock_loop_update(struct rq *rq)

maybe use rq_clock_start_loop_update

> +{
> +       lockdep_assert_rq_held(rq);
> +       rq->clock_update_flags |= RQCF_ACT_SKIP;
> +}
> +
> +static inline void rq_clock_cancel_loop_update(struct rq *rq)

and rq_clock_stop_loop_update here

> +{
> +       lockdep_assert_rq_held(rq);
> +       rq->clock_update_flags &= ~RQCF_ACT_SKIP;
> +}
> +
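
And then, with whatever names we pick, the call sites would pair them
like this (just a sketch of the intended pattern, using the renamed
helpers suggested above):

        rq_lock(rq, &rf);

        update_rq_clock(rq);                    /* one real clock update */
        rq_clock_start_loop_update(rq);         /* set RQCF_ACT_SKIP */

        /* loop over the list; unthrottle_cfs_rq() etc. skip the update */

        rq_clock_stop_loop_update(rq);          /* clear RQCF_ACT_SKIP */
        rq_unlock(rq, &rf);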
>  /*
>   * See rt task throttling, which is the only time a skip