Re: [PATCH v4 1/5] tracing: Introduce faultable tracepoints

From: Paul E. McKenney
Date: Tue Nov 21 2023 - 10:58:49 EST


On Tue, Nov 21, 2023 at 03:36:47PM +0100, Peter Zijlstra wrote:
> On Tue, Nov 21, 2023 at 09:06:18AM -0500, Mathieu Desnoyers wrote:
> > Task trace RCU fits a niche that has the following set of requirements/tradeoffs:
> >
> > - Allows page faults within RCU read-side critical sections (like SRCU),
> > - Has a low-overhead read lock-unlock (without the memory barrier overhead of SRCU),
> > - The tradeoff: Has a rather slow synchronize_rcu(), but tracers should not care about
> > that. Hence, this is not meant to be a generic replacement for SRCU.
> >
> > Based on my reading of https://lwn.net/Articles/253651/ , preemptible RCU is not a good
> > fit for the following reasons:
> >
> > - It disallows blocking within an RCU read-side critical section on
> > non-CONFIG_PREEMPT kernels,
>
> Your counterpoints are confused: we simply don't build preemptible RCU
> unless PREEMPT=y, but that could surely be fixed and exposed as a
> separate flavour.

It certainly used to be available as a separate flavor, but only in
CONFIG_PREEMPT=y kernels. In CONFIG_PREEMPT=n kernels, the API mapped
to the non-preemptible flavor, as in synchronize_sched() and friends.
And we need tracing in the full set of kernels.

> > - AFAIU the mmap_sem used within the page fault handler does not have priority inheritance.
>
> What's that got to do with anything?
>
> Still utterly confused about what task-tracing rcu is and how it is
> different from preemptible rcu.

Tasks Trace RCU allows general blocking in its readers, not just the
subject-to-priority-boosting blocking permitted within preemptible RCU
readers. Restrictions on the use of Tasks Trace RCU are in place to allow
getting away with this general blocking. Even systems generously endowed
with memory are not going to do well when the RCU grace period is blocked
on I/O, especially if that I/O is across a network to a slow file server.
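
To make that niche concrete, something like the following illustrative
sketch is what a faultable tracepoint probe wants to be able to do.
(The probe function and buffer are made up for illustration, but
rcu_read_lock_trace() and rcu_read_unlock_trace() are the real
reader-side API from linux/rcupdate_trace.h.)

	#include <linux/rcupdate_trace.h>
	#include <linux/uaccess.h>

	/* Illustrative only: dereference a user-space pointer from
	 * within a Tasks Trace RCU read-side critical section.  The
	 * copy_from_user() call may take a page fault and block,
	 * which is exactly what preemptible RCU readers may not do
	 * in all configurations. */
	static void example_probe(const char __user *ustr)
	{
		char buf[64];

		rcu_read_lock_trace();	/* cheap: no full memory barrier */
		if (copy_from_user(buf, ustr, sizeof(buf) - 1))
			buf[0] = '\0';	/* faulted; fall back to empty */
		buf[sizeof(buf) - 1] = '\0';
		rcu_read_unlock_trace();
	}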

Which means that a separate RCU instance is needed, namely Tasks Trace RCU.
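
On the update side, a detach path might look like the sketch below,
which is where the "rather slow synchronize_rcu()" tradeoff that
Mathieu mentioned shows up.  (struct probe and detach_probe() are
hypothetical; synchronize_rcu_tasks_trace() is the real grace-period
API.)

	#include <linux/rcupdate_trace.h>
	#include <linux/rculist.h>
	#include <linux/slab.h>

	struct probe {
		struct list_head list;
		/* ... probe payload ... */
	};

	static void detach_probe(struct probe *p)
	{
		list_del_rcu(&p->list);
		/* Wait for all pre-existing readers, including any
		 * blocked on I/O -- hence slow, but tracers detach
		 * probes rarely and can tolerate the wait. */
		synchronize_rcu_tasks_trace();
		kfree(p);
	}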

Thanx, Paul