RE: [PATCH] vdpa: conditionally fill max max queue pair for stats

From: Eli Cohen
Date: Wed Sep 07 2022 - 04:11:56 EST


> From: Jason Wang <jasowang@xxxxxxxxxx>
> Sent: Wednesday, 7 September 2022 9:53
> To: Eli Cohen <elic@xxxxxxxxxx>
> Cc: mst@xxxxxxxxxx; virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx
> Subject: Re: [PATCH] vdpa: conditionally fill max max queue pair for stats
>
> On Wed, Sep 7, 2022 at 2:11 PM Eli Cohen <elic@xxxxxxxxxx> wrote:
> >
> > > From: Jason Wang <jasowang@xxxxxxxxxx>
> > > Sent: Wednesday, 7 September 2022 9:01
> > > To: mst@xxxxxxxxxx; jasowang@xxxxxxxxxx; Eli Cohen <elic@xxxxxxxxxx>;
> > > virtualization@xxxxxxxxxxxxxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx
> > > Subject: [PATCH] vdpa: conditionally fill max max queue pair for stats
> > >
> > > For a device without the multiqueue feature, we will read 0 as
> > > max_virtqueue_pairs from the config.
> > If this is the case for other vdpa vendor drivers, shouldn't we fix it
> > there? After all, config->max_virtqueue_pairs should always show valid
> > values.
>
> Not in the case where the device doesn't offer MQ. According to the
> spec, max_virtqueue_pairs doesn't exist in that case.
>
I see, thanks.
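
A side note for userspace consumers then: once the attribute becomes
optional, a reader should simply treat its absence as a single queue pair.
A rough sketch of what that could look like with libmnl (the helper name is
made up; this is not code from the iproute2 vdpa tool):

#include <stdint.h>
#include <libmnl/libmnl.h>
#include <linux/vdpa.h>

/* Hypothetical parsing helper; tb[] is the already-parsed attribute table. */
uint16_t stats_max_vqp(struct nlattr **tb)
{
	/* A device that does not offer VIRTIO_NET_F_MQ has exactly one
	 * queue pair, and with this patch the kernel no longer fills the
	 * attribute for it, so default to 1 when it is absent.
	 */
	if (!tb[VDPA_ATTR_DEV_NET_CFG_MAX_VQP])
		return 1;
	return mnl_attr_get_u16(tb[VDPA_ATTR_DEV_NET_CFG_MAX_VQP]);
}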

> >
> > > So if we fill VDPA_ATTR_DEV_NET_CFG_MAX_VQP with the value we read
> > > from the config, we will confuse the user.
> > >
> > > Fix this by filling the value only when multiqueue is offered by the
> > > device, so userspace can assume 1 when the attr is not provided.
> > >
> > > Fixes: 13b00b135665c ("vdpa: Add support for querying vendor statistics")
> > > Cc: Eli Cohen <elic@xxxxxxxxxx>
> > > Signed-off-by: Jason Wang <jasowang@xxxxxxxxxx>
> > > ---
> > > drivers/vdpa/vdpa.c | 9 ++++-----
> > > 1 file changed, 4 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/drivers/vdpa/vdpa.c b/drivers/vdpa/vdpa.c
> > > index c06c02704461..bc328197263f 100644
> > > --- a/drivers/vdpa/vdpa.c
> > > +++ b/drivers/vdpa/vdpa.c
> > > @@ -894,7 +894,6 @@ static int vdpa_fill_stats_rec(struct vdpa_device *vdev, struct sk_buff *msg,
> > >  {
> > >  	struct virtio_net_config config = {};
> > >  	u64 features;
> > > -	u16 max_vqp;
> > >  	u8 status;
> > >  	int err;
> > >
> > > @@ -905,15 +904,15 @@ static int vdpa_fill_stats_rec(struct vdpa_device *vdev, struct sk_buff *msg,
> > >  	}
> > >  	vdpa_get_config_unlocked(vdev, 0, &config, sizeof(config));
> > >
> > > -	max_vqp = __virtio16_to_cpu(true, config.max_virtqueue_pairs);
> > > -	if (nla_put_u16(msg, VDPA_ATTR_DEV_NET_CFG_MAX_VQP, max_vqp))
> > > -		return -EMSGSIZE;
> > > -
> > >  	features = vdev->config->get_driver_features(vdev);
> > >  	if (nla_put_u64_64bit(msg, VDPA_ATTR_DEV_NEGOTIATED_FEATURES,
> > >  			      features, VDPA_ATTR_PAD))
> > >  		return -EMSGSIZE;
> > >
> > > +	err = vdpa_dev_net_mq_config_fill(vdev, msg, features, &config);
> > > +	if (err)
> > > +		return err;
> > > +
> >
> > So that means you can't read statistics when MQ is not supported. Is
> > this worth sacrificing?
>
> vdpa_dev_net_mq_config_fill() will return 0 in the case of !MQ, so it
> should still work.

Right, missed that.
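
For the record, my reading of the helper (a sketch from memory, so the exact
body in vdpa.c may differ) is that it returns before touching the config
field unless the device offers MQ, which is why the !MQ path keeps working:

/* Sketch of vdpa_dev_net_mq_config_fill() as I understand it, not verbatim. */
static int vdpa_dev_net_mq_config_fill(struct vdpa_device *vdev,
				       struct sk_buff *msg, u64 features,
				       const struct virtio_net_config *config)
{
	u16 max_vqp;

	/* max_virtqueue_pairs is only defined when the device offers MQ. */
	if (!(features & BIT_ULL(VIRTIO_NET_F_MQ)))
		return 0;

	max_vqp = __virtio16_to_cpu(true, config->max_virtqueue_pairs);
	return nla_put_u16(msg, VDPA_ATTR_DEV_NET_CFG_MAX_VQP, max_vqp);
}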

Reviewed-by: Eli Cohen <elic@xxxxxxxxxx>

>
> Thanks
>
>
> >
> > >  	if (nla_put_u32(msg, VDPA_ATTR_DEV_QUEUE_INDEX, index))
> > >  		return -EMSGSIZE;
> > >
> > > --
> > > 2.25.1
> >