Re: [PATCH net-next v5 2/3] virtio/vsock: send credit update during setting SO_RCVLOWAT

From: Arseniy Krasnov
Date: Thu Nov 30 2023 - 10:50:05 EST




On 30.11.2023 17:11, Stefano Garzarella wrote:
> On Thu, Nov 30, 2023 at 08:58:58AM -0500, Michael S. Tsirkin wrote:
>> On Thu, Nov 30, 2023 at 04:43:34PM +0300, Arseniy Krasnov wrote:
>>>
>>>
>>> On 30.11.2023 16:42, Michael S. Tsirkin wrote:
>>> > On Thu, Nov 30, 2023 at 04:08:39PM +0300, Arseniy Krasnov wrote:
>>> >> Send a credit update message when SO_RCVLOWAT is updated and it is bigger
>>> >> than the number of bytes in the rx queue. This is needed because 'poll()'
>>> >> will wait until the number of bytes in the rx queue is at least
>>> >> SO_RCVLOWAT, so we must kick the sender to send more data. Otherwise a
>>> >> mutual hang of tx/rx is possible: the sender waits for free space while
>>> >> the receiver waits for data in 'poll()'.
>>> >>
>>> >> Signed-off-by: Arseniy Krasnov <avkrasnov@xxxxxxxxxxxxxxxxx>
>>> >> ---
>>> >>  Changelog:
>>> >>  v1 -> v2:
>>> >>   * Update commit message by removing 'This patch adds XXX' manner.
>>> >>   * Do not initialize 'send_update' variable - set it directly during
>>> >>     first usage.
>>> >>  v3 -> v4:
>>> >>   * Fit comment in 'virtio_transport_notify_set_rcvlowat()' to 80 chars.
>>> >>  v4 -> v5:
>>> >>   * Do not change callbacks order in transport structures.
>>> >>
>>> >>  drivers/vhost/vsock.c                   |  1 +
>>> >>  include/linux/virtio_vsock.h            |  1 +
>>> >>  net/vmw_vsock/virtio_transport.c        |  1 +
>>> >>  net/vmw_vsock/virtio_transport_common.c | 27 +++++++++++++++++++++++++
>>> >>  net/vmw_vsock/vsock_loopback.c          |  1 +
>>> >>  5 files changed, 31 insertions(+)
>>> >>
>>> >> diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
>>> >> index f75731396b7e..4146f80db8ac 100644
>>> >> --- a/drivers/vhost/vsock.c
>>> >> +++ b/drivers/vhost/vsock.c
>>> >> @@ -451,6 +451,7 @@ static struct virtio_transport vhost_transport = {
>>> >>          .notify_buffer_size       = virtio_transport_notify_buffer_size,
>>> >>
>>> >>          .read_skb = virtio_transport_read_skb,
>>> >> +        .notify_set_rcvlowat      = virtio_transport_notify_set_rcvlowat,
>>> >>      },
>>> >>
>>> >>      .send_pkt = vhost_transport_send_pkt,
>>> >> diff --git a/include/linux/virtio_vsock.h b/include/linux/virtio_vsock.h
>>> >> index ebb3ce63d64d..c82089dee0c8 100644
>>> >> --- a/include/linux/virtio_vsock.h
>>> >> +++ b/include/linux/virtio_vsock.h
>>> >> @@ -256,4 +256,5 @@ void virtio_transport_put_credit(struct virtio_vsock_sock *vvs, u32 credit);
>>> >>  void virtio_transport_deliver_tap_pkt(struct sk_buff *skb);
>>> >>  int virtio_transport_purge_skbs(void *vsk, struct sk_buff_head *list);
>>> >>  int virtio_transport_read_skb(struct vsock_sock *vsk, skb_read_actor_t read_actor);
>>> >> +int virtio_transport_notify_set_rcvlowat(struct vsock_sock *vsk, int val);
>>> >>  #endif /* _LINUX_VIRTIO_VSOCK_H */
>>> >> diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
>>> >> index af5bab1acee1..8007593a3a93 100644
>>> >> --- a/net/vmw_vsock/virtio_transport.c
>>> >> +++ b/net/vmw_vsock/virtio_transport.c
>>> >> @@ -539,6 +539,7 @@ static struct virtio_transport virtio_transport = {
>>> >>          .notify_buffer_size       = virtio_transport_notify_buffer_size,
>>> >>
>>> >>          .read_skb = virtio_transport_read_skb,
>>> >> +        .notify_set_rcvlowat      = virtio_transport_notify_set_rcvlowat,
>>> >>      },
>>> >>
>>> >>      .send_pkt = virtio_transport_send_pkt,
>>> >> diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
>>> >> index f6dc896bf44c..1cb556ad4597 100644
>>> >> --- a/net/vmw_vsock/virtio_transport_common.c
>>> >> +++ b/net/vmw_vsock/virtio_transport_common.c
>>> >> @@ -1684,6 +1684,33 @@ int virtio_transport_read_skb(struct vsock_sock *vsk, skb_read_actor_t recv_acto
>>> >>  }
>>> >>  EXPORT_SYMBOL_GPL(virtio_transport_read_skb);
>>> >>
>>> >> +int virtio_transport_notify_set_rcvlowat(struct vsock_sock *vsk, int val)
>>> >> +{
>>> >> +    struct virtio_vsock_sock *vvs = vsk->trans;
>>> >> +    bool send_update;
>>> >> +
>>> >> +    spin_lock_bh(&vvs->rx_lock);
>>> >> +
>>> >> +    /* If number of available bytes is less than new SO_RCVLOWAT value,
>>> >> +     * kick sender to send more data, because sender may sleep in its
>>> >> +     * 'send()' syscall waiting for enough space at our side.
>>> >> +     */
>>> >> +    send_update = vvs->rx_bytes < val;
>>> >> +
>>> >> +    spin_unlock_bh(&vvs->rx_lock);
>>> >> +
>>> >> +    if (send_update) {
>>> >> +        int err;
>>> >> +
>>> >> +        err = virtio_transport_send_credit_update(vsk);
>>> >> +        if (err < 0)
>>> >> +            return err;
>>> >> +    }
>>> >> +
>>> >> +    return 0;
>>> >> +}
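
To make the hang scenario from the commit message concrete, below is a minimal
userspace sketch of the receiver side (the port number is made up and error
handling is omitted; it is an illustration only, not a test from this series):

/* Receiver raises SO_RCVLOWAT above the number of bytes already queued
 * and then blocks in poll() until that many bytes arrive. Without the
 * credit update sent by this patch, the peer may meanwhile be blocked
 * in send() for lack of credit: the mutual hang described above.
 */
#include <poll.h>
#include <sys/socket.h>
#include <unistd.h>
#include <linux/vm_sockets.h>

int main(void)
{
	struct sockaddr_vm addr = {
		.svm_family = AF_VSOCK,
		.svm_cid = VMADDR_CID_ANY,
		.svm_port = 1234,	/* hypothetical port */
	};
	int lowat = 128 * 1024;		/* above what the peer sent so far */
	struct pollfd pfd;
	int fd, conn;

	fd = socket(AF_VSOCK, SOCK_STREAM, 0);
	bind(fd, (struct sockaddr *)&addr, sizeof(addr));
	listen(fd, 1);
	conn = accept(fd, NULL, NULL);

	setsockopt(conn, SOL_SOCKET, SO_RCVLOWAT, &lowat, sizeof(lowat));

	pfd.fd = conn;
	pfd.events = POLLIN;
	poll(&pfd, 1, -1);	/* sleeps until rx bytes >= lowat */

	close(conn);
	close(fd);
	return 0;
}
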
>>> >
>>> >
>>> > I find it strange that this will send a credit update
>>> > even if nothing changed since this was called previously.
>>> > I'm not sure whether this is a problem protocol-wise,
>>> > but it certainly was not envisioned when the protocol was
>>> > built. WDYT?
>>>
>>> From the virtio spec I found:
>>>
>>> It is also valid to send a VIRTIO_VSOCK_OP_CREDIT_UPDATE packet without
>>> previously receiving a VIRTIO_VSOCK_OP_CREDIT_REQUEST packet. This allows
>>> communicating updates any time a change in buffer space occurs.
>>>
>>> So I guess there are no limitations on sending this type of packet, e.g. it
>>> is not required to be a reply to some other packet. Please correct me if I'm
>>> wrong.
>>>
>>> Thanks, Arseniy
>>
>>
>> Absolutely. My point was different: with this patch it is possible
>> that you are not adding any credit at all since the previous
>> VIRTIO_VSOCK_OP_CREDIT_UPDATE.
>
> I think the problem we're solving here is that, as an optimization, we avoid
> sending a credit update for every byte we consume and instead use a threshold,
> so here we make sure we still update the peer.
>
> A credit update contains a snapshot of the current state, so sending one
> identical to the previous update should not create any problem.
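
For context, the snapshot Stefano mentions is carried in the last two fields of
the packet header, shown below as in include/uapi/linux/virtio_vsock.h
(reproduced from memory, so check the header for the authoritative layout):

/* Every packet, including VIRTIO_VSOCK_OP_CREDIT_UPDATE, carries the
 * sender's current absolute buffer state, so repeating an update only
 * restates the same values instead of accumulating anything.
 */
struct virtio_vsock_hdr {
	__le64	src_cid;
	__le64	dst_cid;
	__le32	src_port;
	__le32	dst_port;
	__le32	len;
	__le16	type;
	__le16	op;		/* e.g. VIRTIO_VSOCK_OP_CREDIT_UPDATE */
	__le32	flags;
	__le32	buf_alloc;	/* total receive buffer space */
	__le32	fwd_cnt;	/* free-running count of received bytes */
} __attribute__((packed));
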
>
> My doubt now is that we only do this when SO_RCVLOWAT is set; should we also
> do something when we consume bytes, to avoid the optimization we have?
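
For reference, the optimization Stefano refers to lives in the dequeue path;
paraphrased (not a verbatim excerpt of virtio_transport_stream_do_dequeue()),
it looks roughly like this:

/* A credit update is only sent once the free space the peer still
 * knows about drops below one maximum packet size, so that small
 * reads do not each generate an update message.
 */
free_space = vvs->buf_alloc - (vvs->fwd_cnt - vvs->last_fwd_cnt);

if (free_space < VIRTIO_VSOCK_MAX_PKT_BUF_SIZE)
	virtio_transport_send_credit_update(vsk);
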

@Michael, Stefano has just reproduced the problem during byte reading, but
there is already an old fix for this which we forgot to merge :)
I think it should be included in this patchset.

https://lore.kernel.org/netdev/f304eabe-d2ef-11b1-f115-6967632f0339@xxxxxxxxxxxxxx/

Thanks, Arseniy

>
> Stefano
>