Re: [PATCH v1] rcu: Fix and improve RCU read lock checks when !CONFIG_DEBUG_LOCK_ALLOC

From: Paul E. McKenney
Date: Thu Jul 13 2023 - 14:15:08 EST


On Fri, Jul 14, 2023 at 12:09:27AM +0800, Alan Huang wrote:
>
> > On Jul 13, 2023, at 23:33, Joel Fernandes <joel@xxxxxxxxxxxxxxxxx> wrote:
> >
> > On Thu, Jul 13, 2023 at 10:34 AM Gao Xiang <hsiangkao@xxxxxxxxxxxxxxxxx> wrote:
> >>
> >>
> >>
> >> On 2023/7/13 22:07, Joel Fernandes wrote:
> >>> On Thu, Jul 13, 2023 at 12:59 AM Gao Xiang <hsiangkao@xxxxxxxxxxxxxxxxx> wrote:
> >>>> On 2023/7/13 12:52, Paul E. McKenney wrote:
> >>>>> On Thu, Jul 13, 2023 at 12:41:09PM +0800, Gao Xiang wrote:
> >>>>>>
> >>>>>>
> >>>>
> >>>> ...
> >>>>
> >>>>>>
> >>>>>> There are lots of performance issues here and even a plumber
> >>>>>> topic last year to show that, see:
> >>>>>>
> >>>>>> [1] https://lore.kernel.org/r/20230519001709.2563-1-tj@xxxxxxxxxx
> >>>>>> [2] https://lore.kernel.org/r/CAHk-=wgE9kORADrDJ4nEsHHLirqPCZ1tGaEPAZejHdZ03qCOGg@xxxxxxxxxxxxxx
> >>>>>> [3] https://lore.kernel.org/r/CAB=BE-SBtO6vcoyLNA9F-9VaN5R0t3o_Zn+FW8GbO6wyUqFneQ@xxxxxxxxxxxxxx
> >>>>>> [4] https://lpc.events/event/16/contributions/1338/
> >>>>>> and more.
> >>>>>>
> >>>>>> I'm not sure if it's necessary to look into all of that, and
> >>>>>> Sandeep knows more about it than I do (the scheduling issue
> >>>>>> becomes vital on some aarch64 platforms.)
> >>>>>
> >>>>> Hmmm... Please let me try again.
> >>>>>
> >>>>> Assuming that this approach turns out to make sense, the resulting
> >>>>> patch will need to clearly state the performance benefits directly in
> >>>>> the commit log.
> >>>>>
> >>>>> And of course, for the approach to make sense, it must avoid breaking
> >>>>> the existing lockdep-RCU debugging code.
> >>>>>
> >>>>> Is that more clear?
> >>>>
> >>>> Personally I'm not working on the Android platform any more, so I
> >>>> don't have a way to reproduce this; hopefully Sandeep could give
> >>>> actual numbers _again_ for the case where dm-verity is enabled and
> >>>> another workqueue is triggered, and explain why the scheduling
> >>>> latency of the extra work becomes unacceptable.
> >>>>
> >>>
> >>> A question from my side: are we talking only about performance issues,
> >>> or also about a crash? It appears z_erofs_decompress_pcluster() takes
> >>> mutex_lock(&pcl->lock);
> >>>
> >>> So if it is either in an RCU read-side critical section or in an
> >>> atomic section, like the softirq path, then it may
> >>> schedule-while-atomic or trigger RCU warnings.
> >>>
> >>> z_erofs_decompressqueue_endio
> >>>  -> z_erofs_decompress_kickoff
> >>>   -> z_erofs_decompressqueue_work
> >>>    -> z_erofs_decompress_queue
> >>>     -> z_erofs_decompress_pcluster
> >>>      -> mutex_lock
> >>>
> >>
> >> Why does the softirq path not trigger a workqueue instead?
> >
> > I said "if it is". I was giving a scenario. mutex_lock() is not
> > allowed in softirq context or in an RCU-reader.
> >
> >>> Per Sandeep in [1], this stack happens under RCU read-lock in:
> >>>
> >>> #define __blk_mq_run_dispatch_ops(q, check_sleep, dispatch_ops) \
> >>> [...]
> >>> rcu_read_lock();
> >>> (dispatch_ops);
> >>> rcu_read_unlock();
> >>> [...]
> >>>
> >>> Coming from:
> >>> blk_mq_flush_plug_list ->
> >>> blk_mq_run_dispatch_ops(q,
> >>> __blk_mq_flush_plug_list(q, plug));
> >>>
> >>> and __blk_mq_flush_plug_list does this:
> >>> q->mq_ops->queue_rqs(&plug->mq_list);
> >>>
> >>> This somehow ends up calling bio_endio() and then
> >>> z_erofs_decompressqueue_endio(), which grabs the mutex.
> >>>
> >>> So... I have a question: it looks like one of the paths in
> >>> __blk_mq_run_dispatch_ops() uses SRCU, whereas the alternate
> >>> path uses RCU. Why does the alternate path want to block even though
> >>> it is not supposed to? Is the real issue here that BLK_MQ_F_BLOCKING
> >>> should be set? It sounds like you want to block in the "else" path
> >>> even though BLK_MQ_F_BLOCKING is not set:
> >>
> >> BLK_MQ_F_BLOCKING is not a flag that a filesystem can do anything with.
> >> That is block-layer and blk-mq device-driver stuff; filesystems cannot
> >> set this value.
> >>
> >> As I said, as far as I understand, .end_io() could previously only be
> >> called outside of any RCU context, so that was fine, but I don't know
> >> in which cases .end_io() can now be called under some RCU context.
> >
> > From what Sandeep described, the code path is in an RCU reader. My
> > question is more: why doesn't it use SRCU instead, since it clearly
> > does so when BLK_MQ_F_BLOCKING is set? What are the tradeoffs? IMHO,
> > a deeper dive needs to be made into that before concluding that the
> > fix is to use rcu_read_lock_any_held().
>
> Copied from [1]:
>
> "Background: Historically erofs would always schedule a kworker for
> decompression which would incur the scheduling cost regardless of
> the context. But z_erofs_decompressqueue_endio() may not always
> be in atomic context and we could actually benefit from doing the
> decompression in z_erofs_decompressqueue_endio() if we are in
> thread context, for example when running with dm-verity.
> This optimization was later added in patch [2] which has shown
> improvement in performance benchmarks."
>
> I’m not sure if it is a design issue.

I have no official opinion myself, but there are quite a few people
who firmly believe that any situation like this one (where driver or
file-system code needs to query the current context to see if blocking
is OK) constitutes a design flaw. Such people might argue that this
code path should have a clearly documented context, and that if that
documentation states that the code might be in atomic context, then the
driver/fs should assume atomic context. Alternatively, if driver/fs
needs the context to be non-atomic, the callers should make it so.

See for example in_atomic() and its comment header:

/*
* Are we running in atomic context? WARNING: this macro cannot
* always detect atomic context; in particular, it cannot know about
* held spinlocks in non-preemptible kernels. Thus it should not be
* used in the general case to determine whether sleeping is possible.
* Do not use in_atomic() in driver code.
*/
#define in_atomic() (preempt_count() != 0)
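
To illustrate the second alternative, one way is for the caller to state
explicitly whether sleeping is allowed, so that the filesystem never has to
query its own context. A purely illustrative sketch (hypothetical names, not
existing erofs or block-layer code):

#include <linux/workqueue.h>

/* Hypothetical types and helpers, for illustration only. */
struct example_dqueue {
	struct work_struct work;
	/* ... decompression state ... */
};

static void example_do_decompress(struct example_dqueue *q)
{
	/* May sleep: takes mutexes, does GFP_KERNEL allocations, and so on. */
}

/*
 * The caller says whether sleeping is allowed, so this function never
 * needs to guess at its own context.
 */
static void example_decompress_kickoff(struct example_dqueue *q, bool may_sleep)
{
	if (may_sleep)
		example_do_decompress(q);	/* thread context: run inline */
	else
		queue_work(system_unbound_wq, &q->work);	/* atomic context: defer */
}

A thread-context caller would pass true, while a completion path that might
run in softirq context or under rcu_read_lock() would pass false.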

In the immortal words of Dan Frye, this should be good clean fun! ;-)

Thanx, Paul

> [1] https://lore.kernel.org/all/20230621220848.3379029-1-dhavale@xxxxxxxxxx/
>
> >
> > - Joel
>
>