Re: [tip:sched/urgent] sched/clock: Fix clear_sched_clock_stable() preempt wobbly

From: Paul E. McKenney
Date: Thu Mar 23 2017 - 12:52:56 EST


On Thu, Mar 23, 2017 at 02:10:47AM -0700, tip-bot for Peter Zijlstra wrote:
> Commit-ID: 71fdb70eb48784c1f28cdf2e67c4c587dd7f2594
> Gitweb: http://git.kernel.org/tip/71fdb70eb48784c1f28cdf2e67c4c587dd7f2594
> Author: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> AuthorDate: Mon, 13 Mar 2017 13:46:21 +0100
> Committer: Ingo Molnar <mingo@xxxxxxxxxx>
> CommitDate: Thu, 23 Mar 2017 07:31:48 +0100
>
> sched/clock: Fix clear_sched_clock_stable() preempt wobbly
>
> Paul reported a problem with clear_sched_clock_stable(). Since we run
> all of __clear_sched_clock_stable() from workqueue context, there's a
> preempt problem.
>
> Solve it by running only the static_key_disable() from the workqueue.
>
> Reported-by: Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
> Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
> Cc: Mike Galbraith <efault@xxxxxx>
> Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> Cc: fweisbec@xxxxxxxxx
> Link: http://lkml.kernel.org/r/20170313124621.GA3328@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
> Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>

For whatever it is worth given that it is already in -tip:

Tested-by: Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx>

> ---
> kernel/sched/clock.c | 17 ++++++++++++-----
> 1 file changed, 12 insertions(+), 5 deletions(-)
>
> diff --git a/kernel/sched/clock.c b/kernel/sched/clock.c
> index a08795e..fec0f58 100644
> --- a/kernel/sched/clock.c
> +++ b/kernel/sched/clock.c
> @@ -141,7 +141,14 @@ static void __set_sched_clock_stable(void)
>  	tick_dep_clear(TICK_DEP_BIT_CLOCK_UNSTABLE);
>  }
>
> -static void __clear_sched_clock_stable(struct work_struct *work)
> +static void __sched_clock_work(struct work_struct *work)
> +{
> +	static_branch_disable(&__sched_clock_stable);
> +}
> +
> +static DECLARE_WORK(sched_clock_work, __sched_clock_work);
> +
> +static void __clear_sched_clock_stable(void)
>  {
>  	struct sched_clock_data *scd = this_scd();
>
> @@ -160,11 +167,11 @@ static void __clear_sched_clock_stable(struct work_struct *work)
>  			scd->tick_gtod, gtod_offset,
>  			scd->tick_raw, raw_offset);
>
> -	static_branch_disable(&__sched_clock_stable);
>  	tick_dep_set(TICK_DEP_BIT_CLOCK_UNSTABLE);
> -}
>
> -static DECLARE_WORK(sched_clock_work, __clear_sched_clock_stable);
> +	if (sched_clock_stable())
> +		schedule_work(&sched_clock_work);
> +}
>
>  void clear_sched_clock_stable(void)
>  {
> @@ -173,7 +180,7 @@ void clear_sched_clock_stable(void)
>  	smp_mb(); /* matches sched_clock_init_late() */
>
>  	if (sched_clock_running == 2)
> -		schedule_work(&sched_clock_work);
> +		__clear_sched_clock_stable();
>  }
>
>  void sched_clock_init_late(void)
>
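For anyone skimming the diff, here is a rough sketch of how the two paths
fit together once the patch is applied. It is reconstructed from the hunks
above rather than copied from kernel/sched/clock.c, and the offset
bookkeeping, the printk, and the unchanged start of clear_sched_clock_stable()
are elided. The point is simply that only the static_branch_disable() call is
still deferred to the workqueue; everything else now runs directly in the
caller:

/* Sketch reconstructed from the diff above; elided pieces are marked. */

static void __sched_clock_work(struct work_struct *work)
{
	/* The only thing left in workqueue context. */
	static_branch_disable(&__sched_clock_stable);
}

static DECLARE_WORK(sched_clock_work, __sched_clock_work);

static void __clear_sched_clock_stable(void)
{
	/* Per-CPU access now happens in the caller's context, not a worker. */
	struct sched_clock_data *scd = this_scd();

	/* ... offset fixup and printk elided ... */

	tick_dep_set(TICK_DEP_BIT_CLOCK_UNSTABLE);

	/* Punt only the static key disable to the workqueue. */
	if (sched_clock_stable())
		schedule_work(&sched_clock_work);
}

void clear_sched_clock_stable(void)
{
	/* ... earlier part of the function unchanged and elided ... */

	smp_mb(); /* matches sched_clock_init_late() */

	if (sched_clock_running == 2)
		__clear_sched_clock_stable();
}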