Re: [RFC] hypercall-vsock: add a new vsock transport

From: Paraschiv, Andra-Irina
Date: Wed Nov 10 2021 - 16:46:31 EST




On 10/11/2021 13:17, Stefano Garzarella wrote:

On Wed, Nov 10, 2021 at 07:12:36AM +0000, Wang, Wei W wrote:
Hi,

We plan to add a new vsock transport based on hypercall (e.g. vmcall on Intel CPUs).
It transports AF_VSOCK packets between the guest and host, which is similar to
virtio-vsock, vmci-vsock and hyperv-vsock.
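To give a concrete flavor, below is a rough sketch of how a guest could kick a packet to the host over such a hypercall. The hypercall number and register ABI are made up for illustration only; nothing here is defined yet.

#define HC_VSOCK_XMIT   0x1000  /* hypothetical hypercall number */

/*
 * Sketch: tell the host that a packet of 'len' bytes sits at guest
 * physical address 'buf_gpa'.  vmcall is the Intel instruction; AMD
 * guests would need vmmcall, so a real driver has to pick at runtime.
 */
static inline long hc_vsock_xmit(unsigned long buf_gpa, unsigned long len)
{
        long ret;

        asm volatile("vmcall"
                     : "=a" (ret)
                     : "a" (HC_VSOCK_XMIT), "b" (buf_gpa), "c" (len)
                     : "memory");
        return ret;
}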

Compared to the above-listed vsock transports, which are designed for high performance,
the main advantages of hypercall-vsock are:

1) It is VMM agnostic. For example, a guest using hypercall-vsock can run on
   either KVM, Hyper-V, or VMware.

2) It is simpler. It doesn't rely on any complex bus enumeration
   (e.g. a virtio-pci based vsock device may need a full PCI implementation).

An example usage is the communication between MigTD and the host (page 8 at
https://static.sched.com/hosted_files/kvmforum2021/ef/TDX%20Live%20Migration_Wei%20Wang.pdf).
MigTD communicates with the host to assist the migration of the target (user) TD.
MigTD is part of the TCB, so its implementation is expected to be as simple as possible
(e.g. a bare-metal implementation without an OS, and no PCI driver support).
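On the guest kernel side, such a transport would plug into the existing AF_VSOCK core like any other transport. A heavily abbreviated sketch, with hypothetical names and most of the mandatory callbacks omitted:

#include <linux/module.h>
#include <net/af_vsock.h>

/* Hypothetical: a real driver would query the hypervisor for its CID. */
static u32 hcall_vsock_get_local_cid(void)
{
        return 3;       /* placeholder */
}

static struct vsock_transport hcall_vsock_transport = {
        .module         = THIS_MODULE,
        .get_local_cid  = hcall_vsock_get_local_cid,
        /*
         * A working transport must also provide connect/release/shutdown
         * and the whole stream_* / notify_* callback family that
         * virtio_transport.c implements; omitted here for brevity.
         */
};

static int __init hcall_vsock_init(void)
{
        /* Register as a guest-to-host (G2H) transport, as virtio-vsock does. */
        return vsock_core_register(&hcall_vsock_transport, VSOCK_TRANSPORT_F_G2H);
}

static void __exit hcall_vsock_exit(void)
{
        vsock_core_unregister(&hcall_vsock_transport);
}

module_init(hcall_vsock_init);
module_exit(hcall_vsock_exit);
MODULE_LICENSE("GPL");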

Thanks for the CC. Mixing both threads here.

From Stefan:

"
AF_VSOCK is designed to allow multiple transports, so why not. There is
a cost to developing and maintaining a vsock transport though.

I think Amazon Nitro enclaves use virtio-vsock and I've CCed Andra in
case she has thoughts on the pros/cons and how to minimize the trusted
computing base.

If simplicity is the top priority then VIRTIO's MMIO transport without
indirect descriptors and using the packed virtqueue layout reduces the
size of the implementation:
https://docs.oasis-open.org/virtio/virtio/v1.1/cs01/virtio-v1.1-cs01.html#x1-1440002

Stefan
"


On the Nitro Enclaves project side, virtio-mmio is used for the vsock device setup for the enclave. That has worked fine; it has helped to have an already available implementation (e.g. virtio-mmio / virtio-pci) for adoption and ease of use across different types of setups (e.g. distros, kernel versions).

From Stefano:


Adding Andra and Sergio, because IIRC Firecracker and libkrun emulate
virtio-vsock with virtio-mmio, so the implementation should be simple and
also not directly tied to a specific VMM.

Maybe this fits your use case too; that way we don't have to
maintain another driver.

Thanks,
Stefano


Indeed, on the Firecracker side, the vsock device is set up using virtio-mmio [1][2][3]. One specific thing is that on the host, instead of using vhost, AF_UNIX sockets are used [4].
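For completeness, a host-side client in that setup is just a regular AF_UNIX socket plus the small CONNECT handshake described in [4]. A minimal sketch (socket path and guest port are examples):

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int main(void)
{
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        char buf[64];
        ssize_t n;
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);

        /* "./v.sock" stands for whatever uds_path the VMM was configured with. */
        strncpy(addr.sun_path, "./v.sock", sizeof(addr.sun_path) - 1);
        if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
                return 1;

        /* Ask the VMM to forward this connection to guest vsock port 52;
         * it replies "OK <host_port>\n" once the guest side accepts. */
        dprintf(fd, "CONNECT 52\n");
        n = read(fd, buf, sizeof(buf) - 1);
        if (n > 0) {
                buf[n] = '\0';
                printf("handshake reply: %s", buf);
        }
        /* From here on, fd is a plain byte stream to/from the guest. */
        close(fd);
        return 0;
}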

Thanks,
Andra

[1] https://github.com/firecracker-microvm/firecracker/blob/main/src/devices/src/virtio/vsock/mod.rs#L30
[2] https://github.com/firecracker-microvm/firecracker/blob/main/src/vmm/src/builder.rs#L936
[3] https://github.com/firecracker-microvm/firecracker/blob/main/src/vmm/src/builder.rs#L859
[4] https://github.com/firecracker-microvm/firecracker/blob/main/docs/vsock.md


