Re: [PATCH] tracing: Add trace_trigger kernel command line option

From: Paul E. McKenney
Date: Thu Oct 20 2022 - 18:21:19 EST


On Thu, Oct 20, 2022 at 02:53:40PM -0400, Steven Rostedt wrote:
> On Thu, 20 Oct 2022 10:55:12 -0700
> "Paul E. McKenney" <paulmck@xxxxxxxxxx> wrote:
>
> > > synchronize_rcu() {
> > >   rcu_poll_gp_seq_start_unlocked() {
> > >     struct rcu_node *rnp = rcu_get_root();
> > >
> > >     if (rcu_init_invoked()) {
> > >       lockdep_assert_irqs_enabled();
> > >       raw_spin_lock_irq_rcu_node(rnp);
> > >     }
> > >     rcu_poll_gp_seq_start(snap);
> > >     if (rcu_init_invoked())
> > >       raw_spin_unlock_irq_rcu_node(rnp); <<-- Interrupts enabled here
> > >   }
> >
> > Thank you for digging into this!
> >
> > Does the following help?
>
> It does indeed!
>
> I can remove the early_boot_irqs_disabled checks if you can add this to
> this rc cycle and mark it stable.

My plan is to push it in after a bit of testing. The Fixes tag should
make the right -stable stuff happen.

Thanx, Paul

------------------------------------------------------------------------

commit 952643575669332565c77066fb42d67987680f12
Author: Paul E. McKenney <paulmck@xxxxxxxxxx>
Date: Thu Oct 20 10:58:14 2022 -0700

rcu: Keep synchronize_rcu() from enabling irqs in early boot

Making polled RCU grace periods account for expedited grace periods
required acquiring the leaf rcu_node structure's lock during early boot,
but after rcu_init() was called. This lock is irq-disabled, but the
code incorrectly assumes that irqs are always enabled when invoking
synchronize_rcu(). The exception is early boot before the scheduler has
started, which means that upon return from synchronize_rcu(), irqs will
be incorrectly enabled.

This commit fixes this bug by using irqsave/irqrestore locking primitives.

Fixes: bf95b2bc3e42 ("rcu: Switch polled grace-period APIs to ->gp_seq_polled")
Reported-by: Steven Rostedt <rostedt@xxxxxxxxxxx>
Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxx>

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index abc615808b6e8..45ad9ef00d647 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1425,30 +1425,32 @@ static void rcu_poll_gp_seq_end(unsigned long *snap)
 // where caller does not hold the root rcu_node structure's lock.
 static void rcu_poll_gp_seq_start_unlocked(unsigned long *snap)
 {
+	unsigned long flags;
 	struct rcu_node *rnp = rcu_get_root();

 	if (rcu_init_invoked()) {
 		lockdep_assert_irqs_enabled();
-		raw_spin_lock_irq_rcu_node(rnp);
+		raw_spin_lock_irqsave_rcu_node(rnp, flags);
 	}
 	rcu_poll_gp_seq_start(snap);
 	if (rcu_init_invoked())
-		raw_spin_unlock_irq_rcu_node(rnp);
+		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 }

 // Make the polled API aware of the end of a grace period, but where
 // caller does not hold the root rcu_node structure's lock.
 static void rcu_poll_gp_seq_end_unlocked(unsigned long *snap)
 {
+	unsigned long flags;
 	struct rcu_node *rnp = rcu_get_root();

 	if (rcu_init_invoked()) {
 		lockdep_assert_irqs_enabled();
-		raw_spin_lock_irq_rcu_node(rnp);
+		raw_spin_lock_irqsave_rcu_node(rnp, flags);
 	}
 	rcu_poll_gp_seq_end(snap);
 	if (rcu_init_invoked())
-		raw_spin_unlock_irq_rcu_node(rnp);
+		raw_spin_unlock_irqrestore_rcu_node(rnp, flags);
 }

/*