Re: [net-next v1 09/16] page_pool: device memory support

From: Yunsheng Lin
Date: Fri Dec 08 2023 - 04:30:45 EST


On 2023/12/8 8:52, Mina Almasry wrote:
> Overload the LSB of struct page* to indicate that it's a page_pool_iov.
>
> Refactor mm calls on struct page* into helpers, and add page_pool_iov
> handling on those helpers. Modify callers of these mm APIs with calls to
> these helpers instead.
>
> In areas where struct page* is dereferenced, add a check for special
> handling of page_pool_iov.
>
> Signed-off-by: Mina Almasry <almasrymina@xxxxxxxxxx>
>
> ---
>
> v1:
> - Disable fragmentation support for iov properly.
> - fix napi_pp_put_page() path (Yunsheng).
>
> ---
> include/net/page_pool/helpers.h | 78 ++++++++++++++++++++++++++++++++-
> net/core/page_pool.c | 67 ++++++++++++++++++++--------
> net/core/skbuff.c | 28 +++++++-----
> 3 files changed, 141 insertions(+), 32 deletions(-)
>
> diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
> index 00197f14aa87..2d4e0a2c5620 100644
> --- a/include/net/page_pool/helpers.h
> +++ b/include/net/page_pool/helpers.h
> @@ -154,6 +154,64 @@ static inline struct page_pool_iov *page_to_page_pool_iov(struct page *page)
> return NULL;
> }
>
> +static inline int page_pool_page_ref_count(struct page *page)
> +{
> + if (page_is_page_pool_iov(page))

As mentioned before, it seems we need to do the above checking every
time we do some per-page handling in the page_pool core. Is there a
plan in your mind for how to remove that kind of checking in the future?

Even though a static_branch check is added in page_is_page_pool_iov(),
it does not make much sense for the core to have two different 'structs'
for its most basic data.

IMHO, the ppiov for dmabuf is being force-fitted into page_pool without
much design consideration at this point.
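
For reference, from the commit message it sounds like the LSB overload
plus the static_branch boil down to something like the sketch below.
The PP_IOV flag and the page_pool_mem_providers key are my guesses at
the names, not quotes from the patch:

	#define PP_IOV	0x01UL	/* guessed: bit 0 of the pointer tags a ppiov */

	DECLARE_STATIC_KEY_FALSE(page_pool_mem_providers); /* guessed name */

	static inline bool page_is_page_pool_iov(const struct page *page)
	{
		/* only pay for the tag check when a memory provider
		 * is actually in use
		 */
		return static_branch_unlikely(&page_pool_mem_providers) &&
		       ((unsigned long)page & PP_IOV);
	}

Every helper that takes a 'struct page *' then has to branch on that
tag before it can touch the underlying object, which is the checking
I am referring to above.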

> + return page_pool_iov_refcount(page_to_page_pool_iov(page));
> +
> + return page_ref_count(page);
> +}
> +

...
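
To illustrate the per-page-handling point, I would guess the elided
helpers all repeat the same shape, roughly like the following (the
page_pool_iov_put_many() name and the exact mm call are guesses at the
pattern, not quotes from the patch):

	static inline void page_pool_page_put_many(struct page *page,
						   int count)
	{
		/* branch on the ppiov tag before every refcount op */
		if (page_is_page_pool_iov(page))
			page_pool_iov_put_many(page_to_page_pool_iov(page),
					       count);
		else
			page_ref_sub(page, count);
	}

i.e. one branch per helper, for every mm call the core makes.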

> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index b157efea5dea..07f802f1adf1 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -896,19 +896,23 @@ bool napi_pp_put_page(struct page *page, bool napi_safe)
> bool allow_direct = false;
> struct page_pool *pp;
>
> - page = compound_head(page);
> -
> - /* page->pp_magic is OR'ed with PP_SIGNATURE after the allocation
> - * in order to preserve any existing bits, such as bit 0 for the
> - * head page of compound page and bit 1 for pfmemalloc page, so
> - * mask those bits for freeing side when doing below checking,
> - * and page_is_pfmemalloc() is checked in __page_pool_put_page()
> - * to avoid recycling the pfmemalloc page.
> - */
> - if (unlikely((page->pp_magic & ~0x3UL) != PP_SIGNATURE))
> - return false;
> + if (!page_is_page_pool_iov(page)) {

For now, the above may work for the rx part, as it seems you are only
enabling rx for dmabuf.

What is the plan to enable tx for dmabuf? Is it also to be integrated
into page_pool? There was an attempt to enable page_pool for tx, and
Eric seemed to have some comments about it:
https://lkml.kernel.org/netdev/2cf4b672-d7dc-db3d-ce90-15b4e91c4005@xxxxxxxxxx/T/#mb6ab62dc22f38ec621d516259c56dd66353e24a2

If tx is not integrated into page_pool, do we need to create a new
layer for the tx dmabuf?

> + page = compound_head(page);
> +
> + /* page->pp_magic is OR'ed with PP_SIGNATURE after the allocation
> + * in order to preserve any existing bits, such as bit 0 for the
> + * head page of compound page and bit 1 for pfmemalloc page, so
> + * mask those bits for freeing side when doing below checking,
> + * and page_is_pfmemalloc() is checked in __page_pool_put_page()
> + * to avoid recycling the pfmemalloc page.
> + */
> + if (unlikely((page->pp_magic & ~0x3UL) != PP_SIGNATURE))
> + return false;
>
> - pp = page->pp;
> + pp = page->pp;
> + } else {
> + pp = page_to_page_pool_iov(page)->pp;
> + }
>
> /* Allow direct recycle if we have reasons to believe that we are
> * in the same context as the consumer would run, so there's
>