Re: [PATCH v2] tracing: Make sure RCU is watching before calling a stack trace

From: Steven Rostedt
Date: Thu May 18 2017 - 10:50:48 EST


On Thu, 18 May 2017 09:38:09 -0400
Steven Rostedt <rostedt@xxxxxxxxxxx> wrote:

> From: "Steven Rostedt (VMware)" <rostedt@xxxxxxxxxxx>
>
> As stack tracing now requires "rcu watching", force RCU to be watching when
> recording a stack trace.
>
> Link: http://lkml.kernel.org/r/20170512172449.879684501@xxxxxxxxxxx
>
> Cc: "Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx>

Paul,

Can you give me an ack for this version too?

Thanks!

-- Steve

> Signed-off-by: Steven Rostedt (VMware) <rostedt@xxxxxxxxxxx>
> ---
>
> Changes since v1:
>
> My testing discovered that the stack trace can be called with
> interrupts enabled, which is a no-no when calling rcu_irq_enter().
> But when interrupts are enabled, just as when running in an NMI,
> RCU is already watching, so rcu_irq_enter() is not needed.
>
> kernel/trace/trace.c | 26 +++++++++++++++++++++++++-
> 1 file changed, 25 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
> index fcc9a2d..34a98ba 100644
> --- a/kernel/trace/trace.c
> +++ b/kernel/trace/trace.c
> @@ -2568,7 +2568,31 @@ static inline void ftrace_trace_stack(struct trace_array *tr,
> void __trace_stack(struct trace_array *tr, unsigned long flags, int skip,
> int pc)
> {
> - __ftrace_trace_stack(tr->trace_buffer.buffer, flags, skip, pc, NULL);
> + struct ring_buffer *buffer = tr->trace_buffer.buffer;
> +
> + /*
> + * When an NMI triggers, RCU is enabled via rcu_nmi_enter().
> + * Also, RCU is always enabled when interrupts are.
> + */
> + if (!irqs_disabled() || in_nmi()) {
> + __ftrace_trace_stack(buffer, flags, skip, pc, NULL);
> + return;
> + }
> +
> + /*
> + * It is possible that a function is being traced in a
> + * location where RCU is not watching. A call to
> + * rcu_irq_enter() will make sure that it is, but there
> + * are a few internal RCU functions that could be traced
> + * where that won't work either. In those cases, we just
> + * do nothing.
> + */
> + if (unlikely(rcu_irq_enter_disabled()))
> + return;
> +
> + rcu_irq_enter();
> + __ftrace_trace_stack(buffer, flags, skip, pc, NULL);
> + rcu_irq_exit();
> }
>
> /**
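
For readers following along: the guard added to __trace_stack() boils down
to a three-way decision. Below is a stand-alone sketch (not part of the
patch) that models that decision with stubbed predicates so it can be
compiled and run in user space; in the kernel the real checks are
irqs_disabled(), in_nmi() and rcu_irq_enter_disabled(), and the trace
itself is __ftrace_trace_stack() bracketed by rcu_irq_enter() and
rcu_irq_exit().

/*
 * Stand-alone sketch of the guard logic added to __trace_stack() by this
 * patch.  The kernel predicates irqs_disabled(), in_nmi() and
 * rcu_irq_enter_disabled() are stubbed as plain booleans here so the
 * decision tree can be compiled and exercised in user space.
 */
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins for the kernel context checks. */
static bool irqs_disabled_stub;
static bool in_nmi_stub;
static bool rcu_irq_enter_disabled_stub;

enum stack_action {
	TRACE_DIRECT,		/* RCU already watching: record the trace now */
	TRACE_SKIP,		/* rcu_irq_enter() not usable: drop the trace */
	TRACE_WITH_RCU_IRQ,	/* wrap the trace in rcu_irq_enter()/exit()   */
};

static enum stack_action classify_stack_trace(void)
{
	/*
	 * When interrupts are enabled, or when running in an NMI (which
	 * already called rcu_nmi_enter()), RCU is watching and the stack
	 * trace can be recorded directly.
	 */
	if (!irqs_disabled_stub || in_nmi_stub)
		return TRACE_DIRECT;

	/*
	 * A few internal RCU paths may be traced where rcu_irq_enter()
	 * itself must not be called; the patch simply drops the trace.
	 */
	if (rcu_irq_enter_disabled_stub)
		return TRACE_SKIP;

	/* Otherwise make RCU watch for the duration of the stack trace. */
	return TRACE_WITH_RCU_IRQ;
}

int main(void)
{
	irqs_disabled_stub = true;		/* IRQs off ...           */
	in_nmi_stub = false;			/* ... not in an NMI ...  */
	rcu_irq_enter_disabled_stub = false;	/* ... and RCU is usable. */

	/* Prints 2, i.e. TRACE_WITH_RCU_IRQ. */
	printf("action = %d\n", classify_stack_trace());
	return 0;
}

The TRACE_WITH_RCU_IRQ case corresponds to the rcu_irq_enter() /
__ftrace_trace_stack() / rcu_irq_exit() sequence at the end of the hunk
above.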