Re: [RFC 2/3] rcu: Simplify rcu_note_context_switch exit from critical section

From: Paul E. McKenney
Date: Mon Jul 01 2019 - 17:42:50 EST


On Mon, Jul 01, 2019 at 05:33:28PM -0400, Joel Fernandes wrote:
> On Mon, Jul 01, 2019 at 01:03:10PM -0700, Paul E. McKenney wrote:
> > On Mon, Jul 01, 2019 at 12:04:14AM -0400, Joel Fernandes (Google) wrote:
> > > rcu_note_context_switch() tries to handle the case where
> > > __rcu_read_unlock() got preempted, with the context-switch path then
> > > reporting the quiescent state and clearing any bits in the
> > > rcu_read_unlock_special union.
> > >
> > > This can be handled by just calling rcu_preempt_deferred_qs(), which
> > > was added during the RCU consolidation work and already does these checks.
> > >
> > > Tested with RCU config TREE03 for an hour, which succeeded.
> > >
> > > Cc: rcu@xxxxxxxxxxxxxxx
> > > Cc: kernel-team@xxxxxxxxxxx
> > > Signed-off-by: Joel Fernandes (Google) <joel@xxxxxxxxxxxxxxxxx>
> >
> > My first reaction was "that cannot possibly work", but after a bit of
> > digging, it really does appear to work just fine. I therefore expanded
> > the commit log a bit, so please check it to catch any mess-ups on my part.
> >
> > Very cool, thank you very much! ;-)
>
> Awesome! Thanks. I am glad you agree with the change and I agree with your
> changes to the commit log.

Very good, I will push it to -rcu shortly.

Thanx, Paul

> thanks,
>
> - Joel
>
>
> >
> > Thanx, Paul
> >
> > ------------------------------------------------------------------------
> >
> > commit ce547cb41ed7662f70d0b07d4c7f7555ba130c61
> > Author: Joel Fernandes (Google) <joel@xxxxxxxxxxxxxxxxx>
> > Date: Mon Jul 1 00:04:14 2019 -0400
> >
> > rcu: Simplify rcu_note_context_switch exit from critical section
> >
> > Because __rcu_read_unlock() can be preempted just before the call to
> > rcu_read_unlock_special(), it is possible for a task to be preempted just
> > before it would have fully exited its RCU read-side critical section.
> > This would result in a needless extension of that critical section until
> > that task was resumed, which might in turn result in a needlessly
> > long grace period, needless RCU priority boosting, and needless
> > force-quiescent-state actions. Therefore, rcu_note_context_switch()
> > invokes rcu_read_unlock_special() followed by rcu_preempt_deferred_qs() when
> > it detects this situation. This action by rcu_note_context_switch()
> > ends the RCU read-side critical section immediately.
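> >
> > For illustration, the window in question is in __rcu_read_unlock(),
> > which at that time looked roughly as follows (a paraphrased sketch of
> > the v5.2-era code, not the exact tree contents):
> >
> > 	void __rcu_read_unlock(void)
> > 	{
> > 		struct task_struct *t = current;
> >
> > 		if (t->rcu_read_lock_nesting != 1) {
> > 			--t->rcu_read_lock_nesting;
> > 		} else {
> > 			barrier();  /* critical section before exit code. */
> > 			t->rcu_read_lock_nesting = -RCU_NEST_BIAS;
> > 			barrier();  /* assign before ->rcu_read_unlock_special load. */
> > 			if (unlikely(READ_ONCE(t->rcu_read_unlock_special.s))) {
> > 				/* Preemption right here is the case at hand. */
> > 				rcu_read_unlock_special(t);
> > 			}
> > 			barrier();  /* ->rcu_read_unlock_special load before assign. */
> > 			t->rcu_read_lock_nesting = 0;
> > 		}
> > 	}
> >
> > Note that ->rcu_read_lock_nesting stays negative across the whole slow
> > path, which is how rcu_note_context_switch() recognizes this case.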
> >
> > Of course, once the task resumes, it will invoke rcu_read_unlock_special()
> > redundantly. This is harmless because the fact that a preemption
> > happened means that interrupts, preemption, and softirqs cannot
> > have been disabled, so there would be no deferred quiescent state.
> > While ->rcu_read_lock_nesting remains less than zero, none of the
> > ->rcu_read_unlock_special.b bits can be set, and they were all zeroed by
> > the call to rcu_note_context_switch() at task-preemption time. Therefore,
> > setting ->rcu_read_unlock_special.b.exp_hint to false has no effect.
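> >
> > This can be seen at the tail of rcu_read_unlock_special(), again in a
> > paraphrased sketch of the code of that era (the elided deferral path
> > is the one the resumed task cannot take):
> >
> > 	static void rcu_read_unlock_special(struct task_struct *t)
> > 	{
> > 		unsigned long flags;
> > 		bool preempt_bh_were_disabled =
> > 			!!(preempt_count() & (PREEMPT_MASK | SOFTIRQ_MASK));
> > 		bool irqs_were_disabled;
> >
> > 		if (in_nmi())
> > 			return;
> > 		local_irq_save(flags);
> > 		irqs_were_disabled = irqs_disabled_flags(flags);
> > 		if (preempt_bh_were_disabled || irqs_were_disabled) {
> > 			/* Deferral path: not taken by the resumed task,
> > 			 * since preemption, BH, and irqs are all enabled
> > 			 * there. */
> > 			/* ... arrange for later QS reporting ... */
> > 			local_irq_restore(flags);
> > 			return;
> > 		}
> > 		WRITE_ONCE(t->rcu_read_unlock_special.b.exp_hint, false); /* No-op here. */
> > 		rcu_preempt_deferred_qs_irqrestore(t, flags);
> > 	}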
> >
> > Therefore, the extra call to rcu_preempt_deferred_qs_irqrestore()
> > would return immediately, with one possible exception: an expedited
> > grace period that started just as the task was being resumed could
> > leave ->exp_deferred_qs set. This will cause
> > rcu_preempt_deferred_qs_irqrestore() to invoke rcu_report_exp_rdp(),
> > reporting the quiescent state, just as it should. (Such an expedited
> > grace period won't affect the preemption code path due to interrupts
> > having already been disabled.)
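> >
> > That exception is visible at the top of
> > rcu_preempt_deferred_qs_irqrestore(), roughly (another paraphrased
> > excerpt, not the exact tree contents):
> >
> > 	/* Top of rcu_preempt_deferred_qs_irqrestore(), paraphrased: */
> > 	union rcu_special special = t->rcu_read_unlock_special;
> > 	struct rcu_data *rdp = this_cpu_ptr(&rcu_data);
> >
> > 	if (!special.s && !rdp->exp_deferred_qs) {
> > 		local_irq_restore(flags);	/* Common post-resume case. */
> > 		return;
> > 	}
> > 	/* ... */
> > 	if (rdp->exp_deferred_qs)
> > 		rcu_report_exp_rdp(rdp);	/* The expedited-GP exception. */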
> >
> > But when rcu_note_context_switch() invokes rcu_read_unlock_special(),
> > it is doing so with preemption disabled, hence rcu_read_unlock_special()
> > will unconditionally defer the quiescent state, only for the very next
> > line to invoke rcu_preempt_deferred_qs(), thus immediately reporting
> > the deferred quiescent state. It turns out to be safe (and faster)
> > to instead just invoke rcu_preempt_deferred_qs() without the
> > rcu_read_unlock_special() middleman.
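> >
> > Concretely, the sequence being removed,
> >
> > 	rcu_read_unlock_special(t);	/* Preemption disabled: only defers the QS. */
> > 	rcu_preempt_deferred_qs(t);	/* ...which this then reports immediately. */
> >
> > collapses to the plain
> >
> > 	rcu_preempt_deferred_qs(t);	/* Report the deferred QS directly. */
> >
> > in the final "else" branch.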
> >
> > Because this is the invocation during the preemption (as opposed to
> > the invocation just after the resume), at least one of the bits in
> > ->rcu_read_unlock_special.b must be set and ->rcu_read_lock_nesting
> > must be negative. This means that rcu_preempt_need_deferred_qs() must
> > return true, avoiding the early exit from rcu_preempt_deferred_qs().
> > Thus, rcu_preempt_deferred_qs_irqrestore() will be invoked immediately,
> > as required.
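> >
> > For reference, rcu_preempt_need_deferred_qs() at that time read
> > roughly as follows (paraphrased):
> >
> > 	static bool rcu_preempt_need_deferred_qs(struct task_struct *t)
> > 	{
> > 		return (__this_cpu_read(rcu_data.exp_deferred_qs) ||
> > 			READ_ONCE(t->rcu_read_unlock_special.s)) &&
> > 		       t->rcu_read_lock_nesting <= 0;
> > 	}
> >
> > At preemption time, ->rcu_read_unlock_special.s is nonzero and
> > ->rcu_read_lock_nesting is negative, so both conjuncts hold.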
> >
> > This commit therefore simplifies the CONFIG_PREEMPT=y version of
> > rcu_note_context_switch() by removing the "else if" branch of its
> > "if" statement. This change means that all callers that would have
> > invoked rcu_read_unlock_special() followed by rcu_preempt_deferred_qs()
> > will now simply invoke rcu_preempt_deferred_qs(), thus avoiding the
> > rcu_read_unlock_special() middleman when __rcu_read_unlock() is preempted.
> >
> > Cc: rcu@xxxxxxxxxxxxxxx
> > Cc: kernel-team@xxxxxxxxxxx
> > Signed-off-by: Joel Fernandes (Google) <joel@xxxxxxxxxxxxxxxxx>
> > Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxxxxx>
> >
> > diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> > index 187dc076c497..214e4689c29d 100644
> > --- a/kernel/rcu/tree_plugin.h
> > +++ b/kernel/rcu/tree_plugin.h
> > @@ -313,15 +313,6 @@ void rcu_note_context_switch(bool preempt)
> >  				       ? rnp->gp_seq
> >  				       : rcu_seq_snap(&rnp->gp_seq));
> >  		rcu_preempt_ctxt_queue(rnp, rdp);
> > -	} else if (t->rcu_read_lock_nesting < 0 &&
> > -		   t->rcu_read_unlock_special.s) {
> > -
> > -		/*
> > -		 * Complete exit from RCU read-side critical section on
> > -		 * behalf of preempted instance of __rcu_read_unlock().
> > -		 */
> > -		rcu_read_unlock_special(t);
> > -		rcu_preempt_deferred_qs(t);
> >  	} else {
> >  		rcu_preempt_deferred_qs(t);
> >  	}
> >
>