Anthony Liguori wrote:
Avi Kivity wrote:
Anthony Liguori wrote:
This is a PCI device that implements a transport for virtio. It allows virtio
devices to be used by QEMU based VMMs like KVM or Xen.
+
+/* the notify function used when creating a virt queue */
+static void vp_notify(struct virtqueue *vq)
+{
+ struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev);
+ struct virtio_pci_vq_info *info = vq->priv;
+
+ /* we write the queue's selector into the notification register to
+ * signal the other end */
+ iowrite16(info->queue_index, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_NOTIFY);
+}
This means we can't kick multiple queues with one exit.
There is no interface in virtio currently to batch multiple queue notifications, so the only way one could do this AFAICT is to use a timer to delay the notifications. Were you thinking of something else?
No. We can change virtio though, so let's have a flexible ABI.
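A minimal sketch of what a batched kick could look like if the ABI grew a notification register that takes a queue bitmap; VIRTIO_PCI_QUEUE_NOTIFY_BITMAP and the pending_kicks field below are invented purely for illustration and are not part of the posted patch:

#define VIRTIO_PCI_QUEUE_NOTIFY_BITMAP 20	/* hypothetical register offset */

/* Hypothetical only: one register write (one exit) kicks every queue
 * whose bit is set, instead of one VIRTIO_PCI_QUEUE_NOTIFY write per
 * queue.  Neither the register nor the pending_kicks mask exists in
 * the posted patch. */
static void vp_notify_all(struct virtio_pci_device *vp_dev)
{
	u32 pending = vp_dev->pending_kicks;	/* bit n set => queue n needs a kick */

	if (!pending)
		return;

	vp_dev->pending_kicks = 0;
	iowrite32(pending, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_NOTIFY_BITMAP);
}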
I'd also like to see a hypercall-capable version of this (but that can wait).
That can be a different device.
That means the user has to select which device to expose. With feature bits, the hypervisor advertises both pio and hypercalls, the guest picks whatever it wants.
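For example, with a feature bit the guest-side notify hook could stay a single function and just pick its mechanism at run time; the use_hypercall flag and notify_via_hypercall() helper below are assumptions for the sketch, not anything in the patch:

/* Sketch only: a variant of vp_notify() that selects pio or hypercall
 * notification from an advertised feature bit instead of requiring two
 * separate devices.  use_hypercall and notify_via_hypercall() are
 * invented for illustration. */
static void vp_notify_flexible(struct virtqueue *vq)
{
	struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev);
	struct virtio_pci_vq_info *info = vq->priv;

	if (vp_dev->use_hypercall)
		notify_via_hypercall(vp_dev, info->queue_index);
	else
		iowrite16(info->queue_index,
			  vp_dev->ioaddr + VIRTIO_PCI_QUEUE_NOTIFY);
}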
I don't think so. A vmexit is required to lower the IRQ line. It may be possible to do something clever like set a shared memory value that's checked on every vmexit. I think it's very unlikely that it's worth it though.
Why so unlikely? Not all workloads will have good batching.
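A rough sketch of the "shared memory value" idea, with every name below invented: the guest acks the interrupt by clearing a flag in a page shared with the hypervisor, and the hypervisor lowers the line whenever it next exits for some other reason, so no exit is taken just to deassert the IRQ:

/* All names are hypothetical. */
struct virtio_pci_shared {
	u8 isr_pending;			/* set by the host when it asserts INTx */
};

static irqreturn_t vp_interrupt_shared(int irq, void *opaque)
{
	struct virtio_pci_device *vp_dev = opaque;

	if (!vp_dev->shared->isr_pending)
		return IRQ_NONE;		/* not our interrupt */

	vp_dev->shared->isr_pending = 0;	/* acked without an exit */

	/* ... then service the virtqueues as the normal handler would ... */
	return IRQ_HANDLED;
}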
+ return ret;
+}
+
+/* the config->find_vq() implementation */
+static struct virtqueue *vp_find_vq(struct virtio_device *vdev, unsigned index,
+ bool (*callback)(struct virtqueue *vq))
+{
+ struct virtio_pci_device *vp_dev = to_vp_device(vdev);
+ struct virtio_pci_vq_info *info;
+ struct virtqueue *vq;
+ int err;
+ u16 num;
+
+ /* Select the queue we're interested in */
+ iowrite16(index, vp_dev->ioaddr + VIRTIO_PCI_QUEUE_SEL);
I would really like to see this implemented as pci config space, with no tricks like multiplexing several virtqueues on one register. Something like the PCI BARs where you have all the register numbers allocated statically to queues.
My first implementation did that. I switched to using a selector because it reduces the amount of PCI config space used and does not limit the number of queues defined by the ABI as much.
But... it's tricky, and it's nonstandard. With pci config, you can do live migration by shipping the pci config space to the other side. With the special iospace, you need to encode/decode it.
Not much of an argument, I know.
wrt. number of queues, 8 queues will consume 32 bytes of pci space if all you store is the ring pfn.
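To picture the layout being argued for (the offset and helper below are invented for illustration): each queue gets a fixed dword in PCI config space holding its ring PFN, so 8 queues take 8 * 4 = 32 bytes and no selector register is needed:

#define VIRTIO_PCI_CFG_QUEUE_PFN(q)	(0x40 + 4 * (q))	/* hypothetical offset */

/* Sketch only: program queue 'index' by writing its ring PFN to a
 * statically assigned config-space register, instead of selecting the
 * queue through VIRTIO_PCI_QUEUE_SEL first. */
static int vp_set_queue_pfn(struct pci_dev *pci_dev, unsigned int index, u32 pfn)
{
	return pci_write_config_dword(pci_dev,
				      VIRTIO_PCI_CFG_QUEUE_PFN(index), pfn);
}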