Re: [tip:sched/core] sched/fair: Limit sched_cfs_period_timer loop to avoid hard lockup

From: Phil Auld
Date: Tue Apr 09 2019 - 08:48:25 EST


Hi Ingo, Peter,

On Wed, Apr 03, 2019 at 01:38:39AM -0700 tip-bot for Phil Auld wrote:
> Commit-ID: 06ec5d30e8d57b820d44df6340dcb25010d6d0fa
> Gitweb: https://git.kernel.org/tip/06ec5d30e8d57b820d44df6340dcb25010d6d0fa
> Author: Phil Auld <pauld@xxxxxxxxxx>
> AuthorDate: Tue, 19 Mar 2019 09:00:05 -0400
> Committer: Ingo Molnar <mingo@xxxxxxxxxx>
> CommitDate: Wed, 3 Apr 2019 09:50:23 +0200

This commit seems to have gotten lost. It's not in tip, and the direct
gitweb link above now shows a bad commit reference.

Did this fall victim to a reset or something?


Thanks,

Phil


>
> sched/fair: Limit sched_cfs_period_timer loop to avoid hard lockup
>
> With an extremely short cfs_period_us setting on a parent task group with a
> large number of children, the for loop in sched_cfs_period_timer() can run
> until the watchdog fires. There is no guarantee that the call to
> hrtimer_forward_now() will ever return 0: the large number of children can
> make do_sched_cfs_period_timer() take longer than the period, so the timer
> is always behind and the loop never terminates.
>
> NMI watchdog: Watchdog detected hard LOCKUP on cpu 24
> RIP: 0010:tg_nop+0x0/0x10
> <IRQ>
> walk_tg_tree_from+0x29/0xb0
> unthrottle_cfs_rq+0xe0/0x1a0
> distribute_cfs_runtime+0xd3/0xf0
> sched_cfs_period_timer+0xcb/0x160
> ? sched_cfs_slack_timer+0xd0/0xd0
> __hrtimer_run_queues+0xfb/0x270
> hrtimer_interrupt+0x122/0x270
> smp_apic_timer_interrupt+0x6a/0x140
> apic_timer_interrupt+0xf/0x20
> </IRQ>
>
> To prevent this, we add protection to the loop that detects when the loop
> has run too many times and scales the period and quota up, proportionally,
> so that the timer can complete before the next period expires. This
> preserves the relative runtime quota while preventing the hard lockup.
>
> A warning is issued reporting this state and the new values.
>
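
(For anyone following along, below is a minimal userspace sketch of the
scaling arithmetic the patch applies. The starting period/quota values are
illustrative assumptions, not taken from the original report; the point is
only that quota/period stays constant while the period grows by ~15% per
adjustment, capped at the 1s maximum.)

/* sketch.c - illustrates the proportional period/quota scaling */
#include <stdio.h>
#include <stdint.h>

#define NSEC_PER_USEC		1000ULL
#define MAX_CFS_QUOTA_PERIOD	1000000000ULL	/* 1s, as in the kernel */

int main(void)
{
	uint64_t period = 100 * NSEC_PER_USEC;	/* 100us: example value */
	uint64_t quota  =  50 * NSEC_PER_USEC;	/*  50us: example value */

	/* grow the period by 147/128 (~115%), capped at the 1s maximum */
	uint64_t new_period = (period * 147) / 128;
	if (new_period > MAX_CFS_QUOTA_PERIOD)
		new_period = MAX_CFS_QUOTA_PERIOD;

	/* scale the quota by the same ratio so quota/period is unchanged */
	uint64_t new_quota = quota * new_period / period;

	printf("period: %llu -> %llu us\n",
	       (unsigned long long)(period / NSEC_PER_USEC),
	       (unsigned long long)(new_period / NSEC_PER_USEC));
	printf("quota:  %llu -> %llu us\n",
	       (unsigned long long)(quota / NSEC_PER_USEC),
	       (unsigned long long)(new_quota / NSEC_PER_USEC));

	return 0;
}
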
> Signed-off-by: Phil Auld <pauld@xxxxxxxxxx>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
> Cc: Anton Blanchard <anton@xxxxxxxxxx>
> Cc: Ben Segall <bsegall@xxxxxxxxxx>
> Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
> Cc: Mike Galbraith <efault@xxxxxx>
> Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> Cc: <stable@xxxxxxxxxxxxxxx>
> Link: https://lkml.kernel.org/r/20190319130005.25492-1-pauld@xxxxxxxxxx
> Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
> ---
> kernel/sched/fair.c | 25 +++++++++++++++++++++++++
> 1 file changed, 25 insertions(+)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 40bd1e27b1b7..d4cce633eac8 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4885,6 +4885,8 @@ static enum hrtimer_restart sched_cfs_slack_timer(struct hrtimer *timer)
> return HRTIMER_NORESTART;
> }
>
> +extern const u64 max_cfs_quota_period;
> +
> static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
> {
> struct cfs_bandwidth *cfs_b =
> @@ -4892,6 +4894,7 @@ static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
> unsigned long flags;
> int overrun;
> int idle = 0;
> + int count = 0;
>
> raw_spin_lock_irqsave(&cfs_b->lock, flags);
> for (;;) {
> @@ -4899,6 +4902,28 @@ static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
> if (!overrun)
> break;
>
> + if (++count > 3) {
> + u64 new, old = ktime_to_ns(cfs_b->period);
> +
> + new = (old * 147) / 128; /* ~115% */
> + new = min(new, max_cfs_quota_period);
> +
> + cfs_b->period = ns_to_ktime(new);
> +
> + /* since max is 1s, this is limited to 1e9^2, which fits in u64 */
> + cfs_b->quota *= new;
> + cfs_b->quota /= old;
> +
> + pr_warn_ratelimited(
> + "cfs_period_timer[cpu%d]: period too short, scaling up (new cfs_period_us %lld, cfs_quota_us = %lld)\n",
> + smp_processor_id(),
> + new/NSEC_PER_USEC,
> + cfs_b->quota/NSEC_PER_USEC);
> +
> + /* reset count so we don't come right back in here */
> + count = 0;
> + }
> +
> idle = do_sched_cfs_period_timer(cfs_b, overrun, flags);
> }
> if (idle)

--