Re: [PATCH net-next V2 0/8] Packed virtqueue support for vhost

From: Jason Wang
Date: Mon Jul 16 2018 - 20:46:12 EST

On 2018/07/16 20:49, Michael S. Tsirkin wrote:
> On Mon, Jul 16, 2018 at 05:46:33PM +0800, Jason Wang wrote:
>
>> On 2018/07/16 16:39, Michael S. Tsirkin wrote:
>>> On Mon, Jul 16, 2018 at 11:28:03AM +0800, Jason Wang wrote:
>>>> Hi all:
>>>>
>>>> This series implements packed virtqueues. The code was tested with
>>>> Tiwei's guest driver series at https://patchwork.ozlabs.org/cover/942297/
>>>>
>>>> Pktgen tests for both RX and TX do not show an obvious difference from
>>>> split virtqueues. The main bottleneck is the guest Linux driver, since
>>>> it cannot stress vhost to 100% CPU utilization. A full TCP
>>>> benchmark is ongoing. Will test the virtio-net pmd as well when it is
>>>> ready.
>>> Well the question then is why we should bother merging this
>>> if this doesn't give a performance gain.
>> We hit bottlenecks at other places. I can only test the Linux driver, which
>> has lots of overhead, e.g. interrupts. And perf shows only a small fraction
>> of time is spent on e.g. virtqueue manipulation. I hope the virtio-net pmd
>> can give us a different result, but we don't have one ready for testing now.
>> (Jen's V4 has bugs and thus cannot work with this series.)
> Can't Linux busy poll?

For vhost busy polling, there's no difference, since the guest cannot give vhost enough stress. Guest busy polling does not work for the packets generated by pktgen.

> And how about testing loopback with XDP?

No difference; I even shortcut both tun_get_user() on the host and netif_receive_skb() in the guest.

>>> Do you see
>>> a gain in CPU utilization maybe?
>> Unfortunately not.
>>
>>> If not - let's wait for that TCP benchmark result?
>> We can, but you know TCP_STREAM results are sometimes misleading.

>> A bunch of other patches of mine were rebased on this and are now blocked
>> on this series. Considering we don't see a regression, maybe we can merge
>> this first and try optimizations or fixups on top?

>> Thanks
> I'm not sure I understand this approach. Packed ring is just an optimization.
> What value is there in merging it if it does not help speed?

It is needed if you want to support migration from a dpdk or vDPA backend. And we still have a chance to see the performance with the virtio-net pmd in the future. If this does not make sense to you, I will leave this series until we can get results from the virtio-net pmd (or find a way to make packed virtqueues outperform split ones). And I will start to post other optimizations for vhost.

Thanks


Notes:
- This version depends on Tiwei's series at https://patchwork.ozlabs.org/cover/942297/

This version was tested with:

- Zerocopy (Out of Order) support
- vIOMMU support
- mergeable buffer on/off
- busy polling on/off
- vsock (nc-vsock)

Changes from V1:
- drop uapi patch and use Tiwei's
- split the enablement of packed virtqueue into a separate patch

Changes from RFC V5:

- save unnecessary barriers during vhost_add_used_packed_n()
- more compact math for event idx
- fix failure of SET_VRING_BASE when avail_wrap_counter is true
- fix avail_wrap_counter not being copied during GET_VRING_BASE
- introduce SET_VRING_USED_BASE/GET_VRING_USED_BASE for syncing last_used_idx
- rename used_wrap_counter to last_used_wrap_counter
- rebase to net-next

Changes from RFC V4:

- fix signalled_used index recording
- track avail index correctly
- various minor fixes

Changes from RFC V3:

- Fix math on event idx checking
- Sync last avail wrap counter through GET/SET_VRING_BASE
- remove desc_event prefix in the driver/device structure

Changes from RFC V2:

- do not use & in checking desc_event_flags
- off should be most significant bit
- remove the workaround of mergeable buffer for dpdk prototype
- id should be in the last descriptor in the chain
- keep _F_WRITE for write descriptor when adding used
- device flags updating should use ADDR_USED type
- return error on unexpected unavail descriptor in a chain
- return false in vhost_vq_avail_empty if a descriptor is available
- track last seen avail_wrap_counter
- correctly examine available descriptor in get_indirect_packed()
- vhost_idx_diff should return u16 instead of bool

Changes from RFC V1:

- Refactor vhost used elem code to avoid open coding on used elem
- Event suppression support (compile test only).
- Indirect descriptor support (compile test only).
- Zerocopy support.
- vIOMMU support.
- SCSI/VSOCK support (compile test only).
- Fix several bugs

Jason Wang (8):
vhost: move get_rx_bufs to vhost.c
vhost: hide used ring layout from device
vhost: do not use vring_used_elem
vhost_net: do not explicitly manipulate vhost_used_elem
vhost: vhost_put_user() can accept metadata type
vhost: packed ring support
vhost: event suppression for packed ring
vhost: enable packed virtqueues

drivers/vhost/net.c | 143 ++-----
drivers/vhost/scsi.c | 62 +--
drivers/vhost/vhost.c | 994 ++++++++++++++++++++++++++++++++++++++++-----
drivers/vhost/vhost.h | 55 ++-
drivers/vhost/vsock.c | 42 +-
include/uapi/linux/vhost.h | 7 +
6 files changed, 1035 insertions(+), 268 deletions(-)

--
2.7.4