Re: [RFC PATCH 57/86] coccinelle: script to remove cond_resched()

From: Paul E. McKenney
Date: Tue Nov 21 2023 - 10:26:51 EST


On Mon, Nov 20, 2023 at 09:16:19PM -0800, Ankur Arora wrote:
>
> Paul E. McKenney <paulmck@xxxxxxxxxx> writes:
>
> > On Tue, Nov 07, 2023 at 03:07:53PM -0800, Ankur Arora wrote:
> >> Rudimentary script to remove the straightforward subset of
> >> cond_resched() and allies:
> >>
> >> 1) if (need_resched())
> >>        cond_resched()
> >>
> >> 2) expression*;
> >>    cond_resched(); /* or in the reverse order */
> >>
> >> 3) if (expression)
> >>        statement
> >>    cond_resched(); /* or in the reverse order */
> >>
> >> The last two patterns depend on the control-flow level to ensure
> >> that the complex cond_resched() patterns (e.g. conditioned ones)
> >> are left alone and we only pick up ones which are minimally
> >> related to the neighbouring code.
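
(For illustration, pattern 2 would catch something like the first
fragment below, while the control-flow check should leave the second
alone. Illustrative C only; process_page() and batch_done are made-up
names.)

    /* Removable: unconditional, only loosely coupled to its
     * neighbours. */
    for (i = 0; i < nr_pages; i++) {
            process_page(i);
            cond_resched();
    }

    /* Left alone: the call is conditioned on surrounding state. */
    if (iter->batch_done)
            cond_resched();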
> >
> > This series looks to get rid of stall warnings for long in-kernel
> > preempt-enabled code paths, which is of course a very good thing.
> > But removing all of the cond_resched() calls can actually increase
> > scheduling latency compared to the current CONFIG_PREEMPT_NONE=y state,
> > correct?
>
> Not necessarily.
>
> If TIF_NEED_RESCHED_LAZY is set, then we let the current task finish
> before preempting. If that task runs for arbitrarily long (what Thomas
> calls the hog problem), we currently allow it to run for up to one
> extra tick (a limit which might shorten, or become a tunable.)

Agreed, and that is the easy case. But getting rid of the cond_resched()
calls really can increase scheduling latency under this patchset compared
to status-quo mainline.

> If TIF_NEED_RESCHED is set, then it gets folded the same way it does
> now, and preemption happens at the next safe preemption point.
>
> So, I guess the scheduling latency would always be bounded, but how
> much latency a task incurs would depend on scheduler policy.
>
> It is early days, so the policy (or really the rest of it) isn't set
> in stone, but having two levels of preemption -- immediate and
> deferred -- does seem to give the scheduler greater freedom of policy.

"Give the scheduler freedom!" is a wonderful slogan, but not necessarily
a useful one-size-fits-all design principle. The scheduler does not
and cannot know everything, after all.
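
For concreteness, my reading of that two-level scheme is something like
the sketch below. (A sketch only: the function and the per-task tick
counter are made up, and only the TIF_ flag names come from your
series.)

    static void lazy_resched_tick(struct task_struct *curr)
    {
            /* An immediate reschedule is already pending. */
            if (test_tsk_thread_flag(curr, TIF_NEED_RESCHED))
                    return;

            /* Nothing deferred either; leave the task alone. */
            if (!test_tsk_thread_flag(curr, TIF_NEED_RESCHED_LAZY))
                    return;

            /*
             * Deferred case: let the current task finish, but after
             * one extra tick fold the lazy bit into a real
             * reschedule, taken at the next safe preemption point.
             * (lazy_ticks is a made-up per-task counter.)
             */
            if (curr->lazy_ticks++ >= 1)
                    set_tsk_need_resched(curr);
    }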

> Btw, are you concerned about the scheduling latencies in general or the
> scheduling latency of a particular set of tasks?

There are a lot of workloads out there with a lot of objective functions
and constraints, but it is safe to say that both will be important, as
will other things, depending on the workload.

But you knew that already, right? ;-)

> > If so, it would be good to take a measured approach. For example, it
> > is clear that a loop that does a cond_resched() every (say) ten jiffies
> > can remove that cond_resched() without penalty, at least in kernels built
> > with either CONFIG_NO_HZ_FULL=n or CONFIG_PREEMPT=y. But this is not so
> > clear for a loop that does a cond_resched() every (say) ten microseconds.
>
> True. Though both of those loops sound bad :).

Yes, but do they sound bad enough to be useful in the real world? ;-)
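
To be concrete, the contrast I have in mind is something like this,
with made-up helpers standing in for the real work:

    while (!done) {
            heavy_work();           /* ~10 jiffies per pass */
            cond_resched();         /* removal is nearly free: the
                                     * scheduler tick intervenes long
                                     * before latency suffers */
    }

    while (!done) {
            light_work();           /* ~10 microseconds per pass */
            cond_resched();         /* removal deletes thousands of
                                     * preemption points per tick */
    }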

> Yeah, and as we were discussing offlist, the question is the comparative
> density of points where preempt_dec_and_test() returns true vs. calls
> to cond_resched().
>
> And if they are similar, then we could replace cond_resched() quiescence
> reporting with reporting in preempt_enable() (as you mention elsewhere
> in the thread.)

Here is hoping that something like that can help.
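
For that variant, I picture something very roughly like the sketch
below. (Not a patch: today cond_resched() reports the quiescent state
via rcu_all_qs(), and the hook point and checks here are illustrative.)

    static inline void preempt_enable_report_qs(void)
    {
            preempt_enable();

            /* Illustrative: report a quiescent state once fully
             * preemptible again, as cond_resched() does today. */
            if (!preempt_count() && !irqs_disabled())
                    rcu_all_qs();
    }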

I am quite happy with the thought of reducing the number of cond_resched()
invocations, but not at the expense of the Linux kernel failing to do
its job.

Thanx, Paul