Re: [PATCH net-next v4 5/6] page_pool: add a lockdep check for recycling in hardirq

From: Alexander Duyck
Date: Tue Aug 08 2023 - 15:15:17 EST


On Tue, Aug 8, 2023 at 6:16 AM Alexander Lobakin
<aleksander.lobakin@xxxxxxxxx> wrote:
>
> From: Alexander H Duyck <alexander.duyck@xxxxxxxxx>
> Date: Mon, 07 Aug 2023 07:48:54 -0700
>
> > On Fri, 2023-08-04 at 20:05 +0200, Alexander Lobakin wrote:
> >> From: Jakub Kicinski <kuba@xxxxxxxxxx>
> >>
> >> Page pool use in hardirq is prohibited; add debug checks
> >> to catch misuses. IIRC we previously discussed using
> >> DEBUG_NET_WARN_ON_ONCE() for this, but there were concerns
> >> that people will have DEBUG_NET enabled in perf testing.
> >> I don't think anyone enables lockdep in perf testing,
> >> so use lockdep to avoid pushback and arguing :)
> >>
> >> Signed-off-by: Jakub Kicinski <kuba@xxxxxxxxxx>
> >> Acked-by: Jesper Dangaard Brouer <hawk@xxxxxxxxxx>
> >> Signed-off-by: Alexander Lobakin <aleksander.lobakin@xxxxxxxxx>
> >> ---
> >> include/linux/lockdep.h | 7 +++++++
> >> net/core/page_pool.c    | 2 ++
> >> 2 files changed, 9 insertions(+)
> >>
> >> diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
> >> index 310f85903c91..dc2844b071c2 100644
> >> --- a/include/linux/lockdep.h
> >> +++ b/include/linux/lockdep.h
> >> @@ -625,6 +625,12 @@ do { \
> >> WARN_ON_ONCE(__lockdep_enabled && !this_cpu_read(hardirq_context)); \
> >> } while (0)
> >>
> >> +#define lockdep_assert_no_hardirq() \
> >> +do { \
> >> + WARN_ON_ONCE(__lockdep_enabled && (this_cpu_read(hardirq_context) || \
> >> + !this_cpu_read(hardirqs_enabled))); \
> >> +} while (0)
> >> +
> >> #define lockdep_assert_preemption_enabled() \
> >> do { \
> >> WARN_ON_ONCE(IS_ENABLED(CONFIG_PREEMPT_COUNT) && \
> >> @@ -659,6 +665,7 @@ do { \
> >> # define lockdep_assert_irqs_enabled() do { } while (0)
> >> # define lockdep_assert_irqs_disabled() do { } while (0)
> >> # define lockdep_assert_in_irq() do { } while (0)
> >> +# define lockdep_assert_no_hardirq() do { } while (0)
> >>
> >> # define lockdep_assert_preemption_enabled() do { } while (0)
> >> # define lockdep_assert_preemption_disabled() do { } while (0)
> >> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> >> index 03ad74d25959..77cb75e63aca 100644
> >> --- a/net/core/page_pool.c
> >> +++ b/net/core/page_pool.c
> >> @@ -587,6 +587,8 @@ static __always_inline struct page *
> >> __page_pool_put_page(struct page_pool *pool, struct page *page,
> >> unsigned int dma_sync_size, bool allow_direct)
> >> {
> >> + lockdep_assert_no_hardirq();
> >> +
> >> /* This allocator is optimized for the XDP mode that uses
> >> * one-frame-per-page, but have fallbacks that act like the
> >> * regular page allocator APIs.
> >
> > So two points.
> >
> > First, could we look at moving this inside the if statement, just
> > before we return the page? There isn't a risk until we get into the
> > path that actually needs a lock.
> >
> > Secondly, rather than returning an error, is there any reason why we
> > couldn't just not return the page and instead drop into the release
> > path, which wouldn't take the locks in the first place? Either
>
> That is an exception path to quickly catch broken drivers and fix
> them, so why bother? It's not something we have to live with.
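
To put the first point in code: what I had in mind was just moving the
assertion down so it only covers the path that actually ends up taking
the ptr_ring producer lock. Something like this, completely untested,
with the rest of __page_pool_put_page() left as it is in the patch:

	if (allow_direct && in_softirq() &&
	    page_pool_recycle_in_cache(page, pool))
		return NULL;

	/* Returning the page feeds it to page_pool_recycle_in_ring(),
	 * which takes the ptr_ring producer lock, so this is where
	 * hardirq context actually becomes a problem.
	 */
	lockdep_assert_no_hardirq();

	/* Page found as candidate for recycling */
	return page;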

My concern is that the current "fix" consists of stalling a Tx ring.
We need a way to allow forward progress when somebody mixes xdp_frame
and skb traffic, as I suspect we will end up with a number of devices
doing this, since they cannot handle recycling the pages in hardirq
context.
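
To be concrete about the second point: instead of warning and leaving
the buffer stuck on the Tx ring, the same spot could drop into the
release path when the context doesn't allow recycling. Again just an
untested sketch; page_pool_return_page() is the existing release path
in net/core/page_pool.c that doesn't take the ptr_ring lock:

	/* Page found as candidate for recycling */
	if (!in_hardirq())
		return page;

	/* We cannot take the ptr_ring producer lock here, so release
	 * the page back to the allocator instead. Slower, but the Tx
	 * ring keeps making forward progress.
	 */
	page_pool_return_page(pool, page);
	return NULL;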

The only reason the skbs don't have this problem is that they are
queued and then cleaned up in net_tx_action. That is why I wonder
whether we shouldn't look at adding similar support for xdp_frame as
well: something like a dev_kfree_pp_page_any to go along with
dev_kfree_skb_any.
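
Roughly like this, as a completely untested sketch. The helper name and
the pp_completion_queue list are made up here, modeled on
__dev_kfree_skb_any() and the completion queue that net_tx_action()
already drains for skbs:

void dev_kfree_pp_page_any(struct page *page)
{
	/* Mirror __dev_kfree_skb_any(): recycle inline when safe,
	 * defer to softirq context otherwise.
	 */
	if (in_hardirq() || irqs_disabled()) {
		struct softnet_data *sd;
		unsigned long flags;

		local_irq_save(flags);
		sd = this_cpu_ptr(&softnet_data);
		/* pp_completion_queue is hypothetical: the page pool
		 * equivalent of sd->completion_queue for skbs.
		 */
		list_add_tail(&page->lru, &sd->pp_completion_queue);
		raise_softirq_irqoff(NET_TX_SOFTIRQ);
		local_irq_restore(flags);
	} else {
		page_pool_put_full_page(page->pp, page, false);
	}
}

net_tx_action() would then pull pages off that list and recycle them
from softirq context, where taking the ptr_ring lock and touching the
per-CPU cache are both fine.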