Re: [PATCH 0/6] virtio/vsock: introduce dgrams, sk_buff, and qdisc

From: Stefano Garzarella
Date: Tue Sep 06 2022 - 05:01:20 EST


On Thu, Aug 18, 2022 at 12:28:48PM +0800, Jason Wang wrote:

On 2022/8/17 14:54, Michael S. Tsirkin wrote:
On Mon, Aug 15, 2022 at 10:56:03AM -0700, Bobby Eshleman wrote:
Hey everybody,

This series introduces datagrams, packet scheduling, and sk_buff usage
to virtio vsock.

The usage of struct sk_buff benefits users by a) preparing vsock to
integrate with other systems that require sk_buff, such as sockmap and
qdisc, b) supporting basic congestion control via sock_alloc_send_skb,
and c) reducing copying when delivering packets to TAP.
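
As a rough illustration (not code from this series), a transport send
path might allocate its packets with sock_alloc_send_skb() so the
allocation is charged against sk_sndbuf; the helper name and header
size below are made up:

#include <linux/err.h>
#include <linux/skbuff.h>
#include <net/sock.h>

/* Hypothetical helper: allocate a vsock packet as an sk_buff so that
 * sk_sndbuf accounting applies.  Blocks, or fails with -EAGAIN when
 * noblock is set, once the send buffer is exhausted.
 */
static struct sk_buff *example_vsock_alloc_skb(struct sock *sk,
					       size_t hdr_len,
					       size_t payload_len,
					       int noblock)
{
	struct sk_buff *skb;
	int err;

	skb = sock_alloc_send_skb(sk, hdr_len + payload_len, noblock, &err);
	if (!skb)
		return ERR_PTR(err);

	/* Leave room for the virtio-vsock header in front of the payload. */
	skb_reserve(skb, hdr_len);
	return skb;
}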

The socket layer no longer forces errors to be -ENOMEM, since userspace
typically expects -EAGAIN when the sk_sndbuf threshold is reached and
messages are sent with the MSG_DONTWAIT flag.
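
For completeness, here is a minimal userspace sketch of that
expectation (the helper name is a placeholder): a sender using
MSG_DONTWAIT treats EAGAIN as "send buffer full, retry later".

#include <errno.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Hypothetical helper: non-blocking send on an AF_VSOCK socket fd.
 * Returns bytes sent, 0 if the send buffer is currently full, or -1
 * on other errors (errno set by send()).
 */
static ssize_t send_dontwait(int fd, const void *buf, size_t len)
{
	ssize_t n = send(fd, buf, len, MSG_DONTWAIT);

	if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
		return 0;	/* sk_sndbuf exhausted: poll() for POLLOUT and retry */
	return n;
}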

The datagram work is based on previous patches by Jiang Wang[1].

The introduction of datagrams creates a transport layer fairness issue
where datagrams may freely starve streams of queue access. This happens
because, unlike streams, datagrams lack the transactions necessary for
calculating credits and throttling.

Previous proposals introduced changes to the spec that add an
additional virtqueue pair for datagrams[1]. Although that solution
works, using Linux's qdisc for packet scheduling leverages existing
infrastructure, avoids the need to change the virtio specification,
and provides additional capabilities. Using SFQ or fq_codel, for
example, may solve the transport-layer starvation problem. Other use
cases are easy to imagine: services of varying importance may be
assigned different priorities, and the qdisc will apply the
appropriate priority-based scheduling. By default, the system's pfifo
qdisc is used. The qdisc may be bypassed, resuming legacy queuing,
simply by setting the virtio-vsock%d network device to state DOWN.
This technique still allows vsock to work with zero configuration.
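
For illustration, the bypass amounts to bringing the device down (the
equivalent of "ip link set virtio-vsock0 down"); below is a sketch
using the standard ioctl interface, with "virtio-vsock0" as an assumed
instance of the virtio-vsock%d template:

#include <net/if.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

/* Clear IFF_UP on the vsock net device so packets skip the qdisc and
 * legacy queuing is used again.
 */
static int vsock_netdev_down(const char *ifname)	/* e.g. "virtio-vsock0" */
{
	struct ifreq ifr;
	int ret = -1;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0)
		return -1;

	memset(&ifr, 0, sizeof(ifr));
	strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);

	if (ioctl(fd, SIOCGIFFLAGS, &ifr) == 0) {
		ifr.ifr_flags &= ~IFF_UP;
		ret = ioctl(fd, SIOCSIFFLAGS, &ifr);
	}

	close(fd);
	return ret;
}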

The basic question to answer then is this: with a net device, qdisc,
etc. in the picture, how is this different from virtio-net? Why do
you still want to use vsock?


Or maybe it's time to revisit an old idea[1] to unify at least the driver part (e.g. using the virtio-net driver for vsock, so that we could have all the features that vsock is lacking now)?

Sorry for coming late to the discussion!

This would be great, though the last time I looked at it, I found it quite complicated. The main problem is avoiding all the net-specific stuff (MTU, Ethernet header, HW offloading, etc.).

Maybe we could start thinking about this idea by adding a new transport to vsock (e.g. virtio-net-vsock), completely separate from what we have now.
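
Just to make that concrete, such a transport would presumably register
through the existing vsock core API. A hypothetical skeleton follows
(the "virtio_net_vsock" name is made up, and all the transport
callbacks are omitted):

#include <linux/module.h>
#include <net/af_vsock.h>

/* Hypothetical skeleton only: the callback fields of vsock_transport
 * (init, connect, stream/dgram ops, ...) would all need to be filled in.
 */
static struct vsock_transport virtio_net_vsock_transport = {
	.module = THIS_MODULE,
};

static int __init virtio_net_vsock_init(void)
{
	/* Register as a guest-to-host transport, like virtio-vsock today. */
	return vsock_core_register(&virtio_net_vsock_transport,
				   VSOCK_TRANSPORT_F_G2H);
}

static void __exit virtio_net_vsock_exit(void)
{
	vsock_core_unregister(&virtio_net_vsock_transport);
}

module_init(virtio_net_vsock_init);
module_exit(virtio_net_vsock_exit);
MODULE_LICENSE("GPL");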

Thanks,
Stefano