Re: [PATCH v3 0/2] iommu/virtio: Enable IOMMU_CAP_DEFERRED_FLUSH

From: Jean-Philippe Brucker
Date: Mon Nov 20 2023 - 13:39:25 EST


Hi Niklas,

On Mon, Nov 20, 2023 at 03:51:55PM +0100, Niklas Schnelle wrote:
> Hi All,
>
> Previously I used virtio-iommu as a non-s390x test vehicle[0] for the
> single queue flushing scheme introduced by my s390x DMA API conversion
> series[1]. For this I modified virtio-iommu to a) use .iotlb_sync_map
> and b) enable IOMMU_CAP_DEFERRED_FLUSH. It turned out that deferred
> flush and even just the introduction of ops->iotlb_sync_map yield
> performance uplift[2] even with per-CPU queues. So here is a small
> series of these two changes.
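
For anyone following along, roughly what a) and b) boil down to at the
driver level is something like the sketch below. This is only an
illustration based on the existing mainline virtio-iommu driver, not the
patches themselves: viommu_capable(), viommu_sync_req() and
to_viommu_domain() are names from the current driver, and the real
patches also gate the sync on vdomain->nr_endpoints as mentioned further
down.

        /* b) advertise deferred flush support to the DMA layer */
        static bool viommu_capable(struct device *dev, enum iommu_cap cap)
        {
                switch (cap) {
                case IOMMU_CAP_CACHE_COHERENCY:
                case IOMMU_CAP_DEFERRED_FLUSH:
                        return true;
                default:
                        return false;
                }
        }

        /* a) sync outstanding map requests once, after the mapping is built */
        static int viommu_iotlb_sync_map(struct iommu_domain *domain,
                                         unsigned long iova, size_t size)
        {
                struct viommu_domain *vdomain = to_viommu_domain(domain);

                return viommu_sync_req(vdomain->viommu);
        }

The idea being that the virtqueue is kicked once per DMA API mapping
rather than per request, and with IOMMU_CAP_DEFERRED_FLUSH the DMA layer
can batch unmap invalidations in its flush queue instead of waiting on
every unmap.
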
>
> The code is also available on the b4/viommu-deferred-flush branch of my
> kernel.org git repository[3].
>
> Note on testing: I tested this series on my AMD Ryzen 3900X workstation
> using QEMU 8.1.2, a pass-through NVMe, and Intel 82599 NIC VFs. For the
> NVMe I saw an increase of about 10% in IOPS and 30% in read bandwidth
> compared with v6.7-rc2. One odd thing, though, is that QEMU seemed to
> make the entire guest resident/pinned once I passed through a PCI
> device. I seem to remember this wasn't the case with my last version,
> but I'm not sure which QEMU version I used back then.

That's probably expected, now that boot-bypass is enabled by default: on
VM boot, endpoints are able to do DMA to the entire guest-physical address
space, until a virtio-iommu driver disables global bypass in the config
space (at which point the pinned memory is hopefully reclaimed by the
host). QEMU enables it by default to mimic other IOMMU implementations,
and to allow running firmware or an OS that doesn't support virtio-iommu.
It can be disabled with boot-bypass=off.
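
For example, something along these lines (illustrative; assumes the
virtio-iommu-pci device and a QEMU version recent enough to have the
property):

    -device virtio-iommu-pci,boot-bypass=off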

> @Jean-Philippe: I didn't include your R-b's as I changed back to the
> nr_endpoints check, and this is like 30% of the patches.

Thank you for the patches. For the series:

Reviewed-by: Jean-Philippe Brucker <jean-philippe@xxxxxxxxxx>