[PATCH] ftrace: Prevent RCU stall on PREEMPT_VOLUNTARY kernels

From: Guilherme G. Piccoli
Date: Tue Nov 15 2022 - 15:49:20 EST


The function match_records() may take a while to run due to the
potentially large number of string comparisons it performs, so on
PREEMPT_VOLUNTARY kernels this can lead to RCU CPU stall warnings.

Add a cond_resched() to prevent that.

Suggested-by: Steven Rostedt <rostedt@xxxxxxxxxxx>
Acked-by: Paul E. McKenney <paulmck@xxxxxxxxxx> # from RCU CPU stall warning perspective
Cc: Masami Hiramatsu <mhiramat@xxxxxxxxxx>
Cc: Mark Rutland <mark.rutland@xxxxxxx>
Signed-off-by: Guilherme G. Piccoli <gpiccoli@xxxxxxxxxx>
---

Hi Steve / Paul, thanks for the discussions on the first thread [0],
much appreciated! Here is the "official" version.

Steve: let me know if this version works for you; if you'd rather
send it yourself (since you proposed it on IRC), that's fine by me!

Paul: I kept your ACK (thanks for that, BTW) even though I moved the
cond_resched() call to align with Steve's preference. Let me know if
you'd rather drop the ACK because of that.

Cheers,

Guilherme


[0] https://lore.kernel.org/lkml/1ef5fe19-a82f-835e-fda5-455e9c2b94b4@xxxxxxxxxx/
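For anyone landing here without the context of [0]: the change boils
down to adding a voluntary preemption point inside a long scan that
runs under a mutex. Below is a minimal sketch of the pattern only
(the names records_lock, record_matches(), struct record are made up
for illustration; this is not the actual ftrace code):

  /*
   * Simplified sketch, not the real ftrace code: a long scan under a
   * mutex, with a voluntary preemption point on each iteration so
   * that PREEMPT_VOLUNTARY kernels don't trigger RCU CPU stall
   * warnings while the loop runs.
   */
  static int scan_records(struct record *recs, int nr, const char *glob)
  {
          int i, found = 0;

          mutex_lock(&records_lock);              /* hypothetical lock */
          for (i = 0; i < nr; i++) {
                  if (record_matches(&recs[i], glob)) /* hypothetical helper */
                          found = 1;
                  cond_resched();  /* give the scheduler a chance to run */
          }
          mutex_unlock(&records_lock);

          return found;
  }

As far as I understand, cond_resched() is effectively a no-op on
fully preemptible kernels, so the extra call should only matter (and
only help) where preemption is voluntary.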


kernel/trace/ftrace.c | 1 +
1 file changed, 1 insertion(+)

diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 7dc023641bf1..80639bdb85f6 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -4192,6 +4192,7 @@ match_records(struct ftrace_hash *hash, char *func, int len, char *mod)
 			}
 			found = 1;
 		}
+		cond_resched();
 	} while_for_each_ftrace_rec();
  out_unlock:
 	mutex_unlock(&ftrace_lock);
--
2.38.0