[PATCH v2 0/7] iommu/dma: s390 DMA API conversion and optimized IOTLB flushing

From: Niklas Schnelle
Date: Wed Nov 16 2022 - 12:17:25 EST


Hi All,

This patch series converts s390's PCI support from its platform specific DMA
API implementation in arch/s390/pci/pci_dma.c to the common DMA IOMMU layer.
The conversion itself is done in patch 3 and, on top of my s390 IOMMU
improvements series[0], only touches the s390 IOMMU driver and arch code,
moving the remaining functions over from the s390 DMA API implementation. No
changes to common code are necessary for the conversion itself, though patch 4
does propose an additional common code change to let the iommu.strict kernel
parameter override ops->def_domain_type.

After patch 3 the basic conversion is done, and under our machine-level
partitioning hypervisor (LPAR) performance matches or exceeds that of the
existing code. When running under z/VM or KVM, however, performance plummets to
about half that of the existing code due to a much higher rate of IOTLB flushes
for unmapped pages, and we still need to handle out-of-resource indications
(patch 7). Because the hypervisors use IOTLB flushes to synchronize their
shadow tables, these flushes are very expensive, and minimizing them is key to
recovering the lost performance.

To this end patches 5-7 propose a new single-queue IOTLB flushing scheme as an
alternative to the existing per-CPU flush queues. Introducing an alternative
scheme was also suggested by Robin Murphy[1]. In the previous RFC of this
conversion Robin suggested reusing more of the existing queuing logic, which
I have incorporated into this version. The single queue mode is introduced in
patch 5. It allows batching a much larger number of lazily freed IOVAs; a
single queue was also chosen because hypervisors tend to serialize IOTLB
flushes, negating some of the gains of multiple queues. Except for going from
per-CPU queues to one global queue, the queuing logic remains untouched.

Patch 6 then enables variable queue sizes, restricted to powers of 2 so that
shift/mask arithmetic keeps performance as close to the existing code as
possible. Finally, patch 7 allows triggering a queue flush from the IOMMU
driver in order to handle s390's RPCIT out-of-resource indication.

As it is implemented in common code, the single-queue IOTLB flushing scheme can
of course be used by other platforms with expensive IOTLB flushes;
virtio-iommu in particular might be a candidate. With this series, however,
only s390 systems that require IOTLB flushes on map default to it, while LPAR
keeps using the per-CPU queues.

I did verify that the new scheme works on my x86_64 Ryzen workstation by
locally modifying drivers/iommu/iommu.c:iommu_subsys_init() to default to the
single queue mode and verifying its use via "/sys/.../iommu_group/type". I
found no problems with an AMD GPU, an Intel NIC (with SR-IOV), NVMes, or any
on-board peripherals, though the only performance test was a round of CS:GO :-)

As with previous series this is available via my git.kernel.org tree[3] in the
dma_iommu_v2 branch with the s390_dma_iommu_v2 tag.

NOTE: Due to the large drop in performance and the issue of out-of-resource
handling, we can't merge the DMA API conversion (patches 1-3) until we have
a better-suited IOVA flushing scheme that brings improvements similar to those
proposed in patches 5-7.

Best regards,
Niklas

[0] https://lore.kernel.org/linux-iommu/20221109142903.4080275-1-schnelle@xxxxxxxxxxxxx/
[1] https://lore.kernel.org/linux-iommu/3e402947-61f9-b7e8-1414-fde006257b6f@xxxxxxx/
[2] https://lore.kernel.org/linux-iommu/a8e778da-7b41-a6ba-83c3-c366a426c3da@xxxxxxx/
[3] https://git.kernel.org/pub/scm/linux/kernel/git/niks/linux.git/

Changes since RFC v1:
- Patch 1 uses dma_set_mask_and_coherent() (Christoph)
- Patch 3 now documents and allows the use of iommu.strict=0|1 on s390 and
deprecates s390_iommu=strict while making it an alias.
- Patches 5-7 completely reworked to reuse existing queue logic (Robin)
- Added patch 4 to allow using iommu.strict=0|1 to override
ops->def_domain_type.

Niklas Schnelle (7):
s390/ism: Set DMA coherent mask
s390/pci: prepare is_passed_through() for dma-iommu
s390/pci: Use dma-iommu layer
iommu: Let iommu.strict override ops->def_domain_type
iommu/dma: Allow a single FQ in addition to per-CPU FQs
iommu/dma: Enable variable queue size and use larger single queue
iommu/s390: flush queued IOVAs on RPCIT out of resource indication

.../admin-guide/kernel-parameters.txt | 9 +-
arch/s390/include/asm/pci.h | 7 -
arch/s390/include/asm/pci_dma.h | 120 +--
arch/s390/pci/Makefile | 2 +-
arch/s390/pci/pci.c | 22 +-
arch/s390/pci/pci_bus.c | 5 -
arch/s390/pci/pci_debug.c | 13 +-
arch/s390/pci/pci_dma.c | 732 ------------------
arch/s390/pci/pci_event.c | 17 +-
arch/s390/pci/pci_sysfs.c | 19 +-
drivers/iommu/Kconfig | 3 +-
drivers/iommu/dma-iommu.c | 188 ++++-
drivers/iommu/dma-iommu.h | 1 +
drivers/iommu/iommu.c | 18 +-
drivers/iommu/s390-iommu.c | 409 +++++++++-
drivers/s390/net/ism_drv.c | 2 +-
include/linux/iommu.h | 7 +
17 files changed, 603 insertions(+), 971 deletions(-)
delete mode 100644 arch/s390/pci/pci_dma.c

--
2.34.1