> Well, if you can't do that, you can't use virtio-pci on the host.
> You'll need another virtio transport (equivalent to "fake pci" you
> mentioned above).

Ok.

Is there something similar that I can study as an example? Should I
look at virtio-pci?
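
For concreteness: a virtio transport is essentially a small set of
callbacks the virtio core invokes for config-space access, status
handling, and queue setup. The sketch below paraphrases the kernel's
struct virtio_config_ops; the names and signatures are illustrative,
not the exact kernel definitions.

#include <stdint.h>

struct virtio_transport_ops {
    /* read/write the device-specific config space */
    void (*get)(void *dev, unsigned offset, void *buf, unsigned len);
    void (*set)(void *dev, unsigned offset, const void *buf, unsigned len);

    /* device status byte (ACKNOWLEDGE, DRIVER, DRIVER_OK, ...) */
    uint8_t (*get_status)(void *dev);
    void (*set_status)(void *dev, uint8_t status);
    void (*reset)(void *dev);

    /* plus virtqueue discovery and a "kick" doorbell per queue */
};

virtio-pci implements these ops over PCI config/IO space; a new
transport would implement the same ops over the shared-memory window.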
> I think you tried to take two virtio-nets and make them talk together?
> That won't work. You need the code from qemu to talk to virtio-net
> config space, and vhost-net to pump the rings.

It *is* possible to make two unmodified virtio-nets talk together. I've
done it, and it is exactly what the virtio-over-PCI patch does. Study it
and you'll see how I connected the rx/tx queues together.
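
For anyone following along, the crossover itself can be modelled in a
few lines. This is a deliberately simplified userspace sketch with
invented names and no used-ring or interrupt bookkeeping, not the
actual virtio-over-PCI code: the side with PCI access pops a packet
from one virtio-net's tx queue and copies it into an empty buffer the
other side posted on its rx queue.

#include <stddef.h>
#include <string.h>

struct buf {
    void   *data;
    size_t  len;
};

struct queue {
    struct buf bufs[16];
    unsigned   head, tail;    /* free-running ring indices */
};

static int queue_pop(struct queue *q, struct buf *b)
{
    if (q->head == q->tail)
        return -1;                    /* empty */
    *b = q->bufs[q->head++ % 16];
    return 0;
}

static int queue_push(struct queue *q, const struct buf *b)
{
    if (q->tail - q->head == 16)
        return -1;                    /* full */
    q->bufs[q->tail++ % 16] = *b;
    return 0;
}

/* Move one packet from side A's tx queue into side B's rx queue. */
static int pump(struct queue *a_tx, struct queue *b_rx)
{
    struct buf src, dst;

    if (queue_pop(a_tx, &src) || queue_pop(b_rx, &dst))
        return -1;            /* nothing to send, or no rx buffer */
    if (dst.len < src.len)
        return -1;            /* posted rx buffer too small */
    memcpy(dst.data, src.data, src.len);
    /* The real code would now mark both buffers used and kick/interrupt. */
    return 0;
}

int main(void)
{
    static char pkt[] = "packet";
    static char room[16];
    struct queue a_tx = {0}, b_rx = {0};

    queue_push(&a_tx, &(struct buf){ pkt, sizeof(pkt) });   /* A sends */
    queue_push(&b_rx, &(struct buf){ room, sizeof(room) }); /* B posts rx */
    return pump(&a_tx, &b_rx) ? 1 : 0;
}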
> Please find a name other than virtio-over-PCI since it conflicts with
> virtio-pci. You're tunnelling virtio config cycles (which are usually
> done on pci config cycles) on a new protocol which is itself tunnelled
> over PCI shared memory.

Sorry about that. Do you have suggestions for a better name?
I called it virtio-over-PCI in my previous postings to LKML, so until a
new patch is written and posted, I'll keep referring to it by the name
used in the past, so people can search for it.
When I post virtio patches, should I CC another mailing list in addition
to LKML?
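
For illustration, the "tunnelled config cycles" could be carried as
fixed-layout request/completion records in the shared-memory window.
Every name and field below is invented, not the protocol from the
patch. Declaring all fields little-endian on the wire would also
sidestep the mixed-endian (PPC guest / x86 host) question raised below.

#include <stdint.h>

enum cfg_op {
    CFG_READ  = 0,
    CFG_WRITE = 1,
};

struct cfg_msg {
    uint32_t op;      /* enum cfg_op, little-endian on the wire */
    uint32_t offset;  /* offset into the device config space */
    uint32_t len;     /* access width: 1, 2 or 4 bytes */
    uint32_t value;   /* data for writes, result for reads */
    uint32_t seq;     /* matches a completion to its request */
};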
>> That said, I'm not sure how qemu-system-ppc running on x86 could
>> possibly communicate using virtio-net. This would mean the guest is an
>> emulated big-endian PPC, while the host is a little-endian x86. I
>> haven't actually tested this situation, so perhaps I am wrong.
>
> The x86 side only needs to run virtio-net, which is present in RHEL 5.3.
> You'd only need to run virtio-tunnel or however it's called. All the
> eventfd magic takes place on the PCI agents.

I can upgrade the kernel to anything I want on both the x86 and the PPCs.
I'd like to avoid changing the x86 (RHEL5) userspace, though. On the
PPCs, I have full control over the userspace environment.
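
On the "eventfd magic": the primitive itself is small; the real work is
attaching these fds to the PCI doorbell and interrupt paths on the
agents. Here is a minimal standalone illustration of the signalling
pattern (this much is standard Linux API; the PCI wiring is not shown):

#include <sys/eventfd.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    eventfd_t val;
    int efd = eventfd(0, 0);

    if (efd < 0) {
        perror("eventfd");
        return EXIT_FAILURE;
    }

    /* Producer side: signal "work available", e.g. on a ring kick. */
    eventfd_write(efd, 1);

    /* Consumer side: block until signalled, then clear the counter. */
    eventfd_read(efd, &val);
    printf("woken, counter was %llu\n", (unsigned long long)val);

    close(efd);
    return EXIT_SUCCESS;
}

vhost-net consumes fds like these for its kick/call paths, which is why
the x86 side can stay unmodified while the agent side does the wiring.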