Re: [PATCH net-next v2 4/4] net: lan966x: Use page_pool API

From: Horatiu Vultur
Date: Wed Nov 09 2022 - 02:22:46 EST


The 11/08/2022 12:33, Alexander Lobakin wrote:
>
> From: Horatiu Vultur <horatiu.vultur@xxxxxxxxxxxxx>
> Date: Mon, 7 Nov 2022 22:35:21 +0100
>
> > The 11/07/2022 17:40, Alexander Lobakin wrote:
> >
> > Hi Olek,
> >
> > >
> > > From: Horatiu Vultur <horatiu.vultur@xxxxxxxxxxxxx>
> > > Date: Sun, 6 Nov 2022 22:11:54 +0100
> > >
> > > > Use the page_pool API for allocation, freeing and DMA handling instead
> > > > of dev_alloc_pages, __free_pages and dma_map_page.
> > > >
> > > > Signed-off-by: Horatiu Vultur <horatiu.vultur@xxxxxxxxxxxxx>
> > > > ---
> > > > .../net/ethernet/microchip/lan966x/Kconfig | 1 +
> > > > .../ethernet/microchip/lan966x/lan966x_fdma.c | 72 ++++++++++---------
> > > > .../ethernet/microchip/lan966x/lan966x_main.h | 3 +
> > > > 3 files changed, 43 insertions(+), 33 deletions(-)
> > >
> > > [...]
> > >
> > > > @@ -84,6 +62,27 @@ static void lan966x_fdma_rx_add_dcb(struct lan966x_rx *rx,
> > > > rx->last_entry = dcb;
> > > > }
> > > >
> > > > +static int lan966x_fdma_rx_alloc_page_pool(struct lan966x_rx *rx)
> > > > +{
> > > > + struct lan966x *lan966x = rx->lan966x;
> > > > + struct page_pool_params pp_params = {
> > > > + .order = rx->page_order,
> > > > + .flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
> > > > + .pool_size = FDMA_DCB_MAX,
> > > > + .nid = NUMA_NO_NODE,
> > > > + .dev = lan966x->dev,
> > > > + .dma_dir = DMA_FROM_DEVICE,
> > > > + .offset = 0,
> > > > + .max_len = PAGE_SIZE << rx->page_order,
> > >
> > > ::max_len's primary purpose is to save time on DMA syncs.
> > > First of all, you can subtract
> > > `SKB_DATA_ALIGN(sizeof(struct skb_shared_info))`; your HW never
> > > writes to those last couple hundred bytes.
> > > But I suggest calculating ::max_len based on your current MTU
> > > value. Let's say you have 16k pages and an MTU of 1500 -- that is a
> > > huge difference (unless your DMA is always coherent, but I assume
> > > that's not the case).
> > >
> > > In lan966x_fdma_change_mtu() you do:
> > >
> > > max_mtu = lan966x_fdma_get_max_mtu(lan966x);
> > > max_mtu += IFH_LEN_BYTES;
> > > max_mtu += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
> > > max_mtu += VLAN_HLEN * 2;
> > >
> > > `lan966x_fdma_get_max_mtu(lan966x) + IFH_LEN_BYTES + VLAN_HLEN * 2`
> > > (ie 1536 for the MTU of 1500) is your max_len value actually, given
> > > that you don't reserve any headroom (which is unfortunate, but I
> > > guess you're working on this already, since XDP requires
> > > %XDP_PACKET_HEADROOM).
> >
> > Thanks for the suggestion. I will try it.
> > Regarding XDP_PACKET_HEADROOM, for XDP_DROP I didn't see it being
> > needed. Once support for XDP_TX or XDP_REDIRECT is added, then yes, I
> > will also need to reserve the headroom.
>
> Correct, since you're disabling metadata support in
> xdp_prepare_buff(), headroom is not needed for pass and drop
> actions.
>
> It's always good to have at least %NET_SKB_PAD headroom inside an
> skb, so that networking stack won't perform excessive reallocations,
> and your code currently misses that -- if I understand correctly,
> after converting hardware-specific header to an Ethernet header you
> have 28 - 14 = 14 bytes of headroom, which sometimes can be not
> enough for example for forwarding cases. It's not related to XDP,
> but I would do that as a prerequisite patch for Tx/redirect, since
> you'll be adding headroom support anyway :)

Just a small comment here. There is no need to convert the
hardware-specific header, because the Ethernet header follows right
after it. So I would have 28 bytes left for headroom, but that is still
less than NET_SKB_PAD.
But I got the idea. When I add Tx/redirect support, one of those
patches will make sure we have enough headroom.

>
> >
> > >
> > > > + };
> > > > +
> > > > + rx->page_pool = page_pool_create(&pp_params);
> > > > + if (IS_ERR(rx->page_pool))
> > > > + return PTR_ERR(rx->page_pool);
>
> [...]
>
> > > > --
> > > > 2.38.0
> > >
> > > Thanks,
> > > Olek
> >
> > --
> > /Horatiu
>
> Thanks,
> Olek

--
/Horatiu