Re: [PATCH net-next V4 2/3] virtio_net: support per queue interrupt coalesce command

From: Paolo Abeni
Date: Thu Jul 27 2023 - 09:29:24 EST


On Tue, 2023-07-25 at 16:07 +0300, Gavin Li wrote:
> Add interrupt_coalesce config in send_queue and receive_queue to cache user
> config.
>
> Send per virtqueue interrupt moderation config to underlying device in
> order to have more efficient interrupt moderation and cpu utilization of
> guest VM.
>
> Additionally, address all the VQs when updating the global configuration,
> as now the individual VQs configuration can diverge from the global
> configuration.
>
> Signed-off-by: Gavin Li <gavinl@xxxxxxxxxx>
> Reviewed-by: Dragos Tatulea <dtatulea@xxxxxxxxxx>
> Reviewed-by: Jiri Pirko <jiri@xxxxxxxxxx>
> Acked-by: Michael S. Tsirkin <mst@xxxxxxxxxx>

FTR, this patch is significantly different from the version previously
acked/reviewed; I'm unsure whether all the reviewers are ok with the
new one.
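
For the archives, the per-queue caching scheme as I understand it from
the changelog, as a rough sketch only (the struct and field names below
are illustrative, not necessarily the ones the patch introduces):

/* User-requested coalescing parameters, cached per virtqueue so that a
 * later global update can be compared against each queue's current
 * state before touching the device.
 */
struct virtnet_interrupt_coalesce {
        u32 max_packets;        /* tx-frames / rx-frames */
        u32 max_usecs;          /* tx-usecs / rx-usecs */
};

/* one instance embedded in each send_queue and receive_queue, pushed
 * to the device via the per-VQ coalescing control command
 */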

[...]

> static int virtnet_set_coalesce(struct net_device *dev,
>                                 struct ethtool_coalesce *ec,
>                                 struct kernel_ethtool_coalesce *kernel_coal,
>                                 struct netlink_ext_ack *extack)
> {
>        struct virtnet_info *vi = netdev_priv(dev);
> -      int ret, i, napi_weight;
> +      int ret, queue_number, napi_weight;
>        bool update_napi = false;
>
>        /* Can't change NAPI weight if the link is up */
>        napi_weight = ec->tx_max_coalesced_frames ? NAPI_POLL_WEIGHT : 0;
> -      if (napi_weight ^ vi->sq[0].napi.weight) {
> -              if (dev->flags & IFF_UP)
> -                      return -EBUSY;
> -              else
> -                      update_napi = true;
> +      for (queue_number = 0; queue_number < vi->max_queue_pairs; queue_number++) {
> +              ret = virtnet_should_update_vq_weight(dev->flags, napi_weight,
> +                                                    vi->sq[queue_number].napi.weight,
> +                                                    &update_napi);
> +              if (ret)
> +                      return ret;
> +
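
Side note for other readers: virtnet_should_update_vq_weight() is not
quoted here; I'm assuming it is just the old per-queue check factored
out into a helper, roughly along these lines (reconstructed from the
removed hunk above, not copied from the patch):

static int virtnet_should_update_vq_weight(int dev_flags, int weight,
                                           int vq_weight, bool *may_update)
{
        /* changing the NAPI weight is only allowed while the link is down */
        if (weight ^ vq_weight) {
                if (dev_flags & IFF_UP)
                        return -EBUSY;
                *may_update = true;
        }
        return 0;
}

If the actual helper differs, please correct me; the comment below
applies either way.
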
> +              if (update_napi) {
> +                      /* All queues that belong to [queue_number, queue_count] will be
> +                       * updated for the sake of simplicity, which might not be necessary

It looks like the comment above still refers to the old code. Should
be:
[queue_number, vi->max_queue_pairs]

Otherwise LGTM, thanks!

Paolo