Re: [PATCH net-next v2] ethtool: ice: Support for RSS settings to GTP from ethtool

From: takeru hayasaka
Date: Tue Oct 17 2023 - 21:53:26 EST


Hi Jakub san

Thank you for your continued review!

> I may be wrong (this API predates my involvement in Linux by a decade)
> but I think that the current ethtool API is not all that precise in
> terms of exact packet headers.
>
> For example the TCPv6 flow includes IPv6 and TCP headers, but the
> packet may or may not have any number of encapsulation headers in place.
> VLAN, VXLAN, GENEVE etc. If the NIC can parse them - it will extract
> the inner-most IPv6 and TCP src/dst and hash on that.
>
> In a way TCP or IP headers may also differ by e.g. including options.
> But as long as the fields we care about (source / dst) are in place,
> we treat all variants of the header the same.
>
> The question really is how much we should extend this sort of thinking
> to GTP and say - we treat all GTP flows with extractable TEID the same;
> and how much the user can actually benefit from controlling particular
> sub-category of GTP flows. Or knowing that NIC supports a particular
> sub-category.
>
> Let's forget about capabilities of Intel NICs for now - can you as a
> user think of practical use cases where we'd want to turn on hashing
> based on TEID for, e.g. gtpu6 and not gtpc6?

Of course!
There are clear cases where we would want to control gtpu4|6 independently of gtpc4|6.

For instance, some PGWs can split the termination of 4G LTE user
traffic into separate Control-plane and User-plane functions (C/U
separation), which is quite convenient from a scalability perspective.
In 5G, the UPF explicitly carries User-plane (U-plane) traffic only.

Therefore, a given service is expected to receive only GTP-U traffic
(e.g., PGW-U, UPF) or only GTP-C traffic (e.g., PGW-C), so there is a
real need to control GTP-U on its own.

If options like gtp4|6 did not distinguish Control/User (C/U) packets,
I can imagine scenarios where performance tuning becomes difficult.
For example, we may want to process only the control traffic (GTP-C)
on specific CPUs via Flow Director, while handling GTP-U on the
remaining cores.
The reverse split is also plausible: in IoT scenarios, where each
device generates little user traffic but the number of devices is
vast, control traffic can grow substantially.
In short, this comes down to being mindful of CPU core affinity.
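To make the tuning scenario concrete, here is a rough sketch of what the commands could look like if the gtpu4/gtpc4 flow types from this proposal were accepted. The device name, the queue number, and especially the TEID field letter are hypothetical placeholders, not settled syntax:

```shell
# Hypothetical: spread GTP-U (user-plane) flows across RSS queues by
# hashing on src/dst IP plus the TEID. The field letter for TEID ('e'
# here) is a placeholder -- the real mapping depends on the uapi.
ethtool -N eth0 rx-flow-hash gtpu4 sde

# Hypothetical: steer all GTP-C (control-plane) traffic to one
# dedicated queue with an ntuple (Flow Director) rule, keeping it off
# the cores that process GTP-U.
ethtool -N eth0 flow-type gtpc4 action 4
```

The point is that neither command is expressible if GTP-U and GTP-C are folded into a single gtp4 flow type.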

If I were to propose this again, setting aside Intel-specific
considerations, I believe that from the perspective of ethtool users
the smallest units should be gtpu4|6 and gtpc4|6.
As for extension headers and the like, I think it would be more
straightforward to handle them implicitly.

What does everyone else think?

On Wed, Oct 18, 2023 at 8:49 Jakub Kicinski <kuba@xxxxxxxxxx> wrote:
>
> On Tue, 17 Oct 2023 23:37:57 +0900 takeru hayasaka wrote:
> > > Are there really deployments where the *very limited* GTP-C control
> > I also think that it should not be limited to GTP-C. However, as I
> > wrote in the email earlier, all the flows written are different in
> > packet structure, including GTP-C. In the semantics of ethtool, I
> > thought it was correct to pass a fixed packet structure and the
> > controllable parameters for it. At least, the Intel ice driver that I
> > modified is already like that.