Re: [patch 21/32] NTB/msi: Convert to msi_on_each_desc()

From: Dave Jiang
Date: Wed Dec 01 2021 - 11:29:06 EST



On 12/1/2021 3:16 AM, Thomas Gleixner wrote:
> Jason,
>
> CC+ IOMMU folks
>
> On Tue, Nov 30 2021 at 20:17, Jason Gunthorpe wrote:
>> On Tue, Nov 30, 2021 at 10:23:16PM +0100, Thomas Gleixner wrote:
>>> The real problem is where to store the MSI descriptors because the PCI
>>> device has its own real PCI/MSI-X interrupts which means it still shares
>>> the storage space.
>>
>> Er.. I never realized that just looking at the patches :|
>>
>> That is relevant to all real "IMS" users. IDXD escaped this because
>> it, IMHO, wrongly used the mdev with the IRQ layer. The mdev is purely
>> a messy artifact of VFIO, it should not be required to make the IRQ
>> layers work.
>>
>> I don't think it makes sense that the msi_desc would point to a mdev,
>> the iommu layer consumes the msi_desc_to_dev(), it really should point
>> to the physical device that originates the message with a proper
>> iommu ops/data/etc.
>
> Looking at the device slices as subdevices with their own struct device
> makes a lot of sense from the conceptual level. That makes it pretty
> much obvious to manage the MSIs of those devices at this level like we
> do for any other device.
>
> Whether mdev is the right encapsulation for these subdevices is an
> orthogonal problem.
>
> I surely agree that msi_desc::dev is an interesting question, but we
> already have this disconnect of msi_desc::dev and DMA today due to DMA
> aliasing. I haven't looked at that in detail yet, but of course the
> alias handling is substantially different across the various IOMMU
> implementations.
>
> Though I fear there is also a use case for MSI-X and IMS tied to the
> same device. That network card you are talking about might end up using
> MSI-X for a control block and then IMS for the actual network queues
> when it is used as a physical function device as a whole, but that's
> conceptually a different case.
Hi Thomas. This is actually the IDXD usage for a mediated device passed to a guest kernel once we plumb IMS pass-through to the guest, rather than the previous implementation of having an MSI-X vector on the guest backed by IMS. The control block for the mediated device is emulated, so an emulated MSI-X vector will be surfaced as vector 0. The queues, however, will be backed by IMS vectors. So we end up needing MSI-X and IMS to coexist on the guest kernel for the same device.
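In other words, the guest ends up with a vector layout roughly like the sketch below (hypothetical names, just to illustrate the split; actual vector counts depend on how many queues are assigned):

/*
 * Illustrative guest-visible vector layout for such a mediated device
 * (names are made up for illustration, not IDXD code).
 */
enum mdev_guest_vector {
	MDEV_VEC_CONTROL = 0,	/* emulated MSI-X vector for the emulated control block */
	MDEV_VEC_QUEUE_FIRST,	/* vectors 1..N: queues, each backed by a host IMS entry */
};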

DJ


>>> I'm currently tending to partition the index space in the xarray:
>>>
>>>        0x00000000 - 0x0000ffff  PCI/MSI-X
>>>        0x00010000 - 0x0001ffff  NTB
>>
>> It is OK, with some xarray work it can be range allocating & reserving
>> so that the msi_domain_alloc_irqs() flows can carve out chunks of the
>> number space..
>>
>> Another view is the msi_domain_alloc_irqs() flows should have their
>> own xarrays..
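To make the range-carving idea concrete, here is a minimal sketch of per-domain slices of a single allocating xarray (the helper name, the fixed ranges and the GFP flag are assumptions for illustration, not what the series does):

#include <linux/xarray.h>

/* Hypothetical per-domain slices of the MSI descriptor index space. */
#define MSIX_RANGE	XA_LIMIT(0x00000000, 0x0000ffff)	/* PCI/MSI-X */
#define NTB_RANGE	XA_LIMIT(0x00010000, 0x0001ffff)	/* NTB */

static DEFINE_XARRAY_ALLOC(msi_descs);

/* Store @desc in the slice described by @range; the chosen index comes back via @index. */
static int msi_insert_desc_in_range(void *desc, struct xa_limit range, u32 *index)
{
	return xa_alloc(&msi_descs, index, desc, range, GFP_KERNEL);
}

A caller would pass its domain's limit, e.g. msi_insert_desc_in_range(desc, NTB_RANGE, &idx), and xa_alloc() returns -EBUSY once that slice is exhausted.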
> Yes, I was thinking about that as well. The trivial way would be:
>
>	struct xarray store[MSI_MAX_STORES];
>
> and then have a store index for each allocation domain. With the
> proposed encapsulation of the xarray handling that's definitely
> feasible. Whether that buys much is a different question. Let me think
> about it some more.
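A minimal sketch of that per-domain variant, with one xarray per allocation domain and a store index to pick it (the struct name, enum values and lookup helper are made up for illustration):

#include <linux/xarray.h>

/* One xarray per allocation domain instead of one shared index space. */
enum msi_desc_store {
	MSI_STORE_PCI_MSIX,	/* the device's real PCI/MSI-X descriptors */
	MSI_STORE_NTB,		/* e.g. NTB / IMS style descriptors */
	MSI_MAX_STORES,
};

struct msi_dev_descs {
	struct xarray store[MSI_MAX_STORES];	/* each xa_init()ed at device setup */
};

/* Look up a descriptor by (domain store, index) rather than by a global index. */
static void *msi_desc_get(struct msi_dev_descs *md, enum msi_desc_store s,
			  unsigned long index)
{
	return xa_load(&md->store[s], index);
}

The lookup then takes a (store, index) pair instead of partitioning one global index space, at the price of initializing and tearing down MSI_MAX_STORES xarrays per device.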

>>> which is feasible now with the range modifications and way simpler to do
>>> with xarray than with the linked list.
>>
>> Indeed!
>
> I'm glad you like the approach.
>
> Thanks,
>
>         tglx