Re: [PATCH v6 0/6] iommufd: Add nesting infrastructure (part 2/2)

From: Nicolin Chen
Date: Wed Dec 13 2023 - 15:02:48 EST


On Wed, Dec 13, 2023 at 08:40:55AM -0400, Jason Gunthorpe wrote:
> On Tue, Dec 12, 2023 at 12:05:41PM -0800, Nicolin Chen wrote:
> > > > // iommufd_private.h
> > > >
> > > > enum iommufd_object_type {
> > > > 	...
> > > > +	IOMMUFD_OBJ_VIOMMU,
> > > > 	...
> > > > };
> > > >
> > > > +struct iommufd_viommu {
> > > > +	struct iommufd_object obj;
> > > > +	struct iommufd_hwpt_paging *hwpt;
> > > > +	struct xarray devices;
> > > > +};
> > > >
> > > > struct iommufd_hwpt_paging {
> > > > 	...
> > > > +	struct list_head viommu_list;
> > > > 	...
> > > > };
> > >
> > > I'd probably first try to go backwards and link the hwpt to the
> > > viommu.
> >
> > I think a VM should have only one hwpt_paging object while one
> > or more viommu objects, so we could do only viommu->hwpt_paging
> > and hwpt_paging->viommu_list. How to go backwards?
>
> That is probably how things would work but I don't know if it makes
> sense to enforce it in the kernel logic..
>
> Point the S2 to a list of viommu objects it is linked to

Hmm, isn't that hwpt_paging->viommu_list already?
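
To double-check that we mean the same thing, roughly (the hwpt_item
name is just a placeholder I made up, extending the sketch above):

	struct iommufd_viommu {
		struct iommufd_object obj;
		struct iommufd_hwpt_paging *hwpt;
		struct list_head hwpt_item;	/* entry in hwpt->viommu_list */
		struct xarray devices;
	};

	struct iommufd_hwpt_paging {
		...
		struct list_head viommu_list;	/* viommus linked to this S2 */
		...
	};

So the S2 can walk its viommu_list, and each viommu points back at
its S2 via viommu->hwpt.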

> > > The second version maybe we have the xarray, or maybe we just push the
> > > xarray to the eventual viommu series.
> >
> > I think that I still don't get the purpose of the xarray here.
> > It was needed previously because a cache invalidate per hwpt
> > doesn't know which device. Now IOMMUFD_DEV_INVALIDATE knows.
> >
> > Maybe it's related to that narrative "logically we could have
> > multiple mappings per iommufd" that you mentioned above. Mind
> > elaborating a bit?
> >
> > In my mind, viommu is allocated by VMM per piommu, by detecting
> > the piommu_id via hw_info. In that case, viommu can only have
> > one unique device list. If IOMMUFD_DEV_INVALIDATE passes in the
> > dev_id, we don't really need a mapping of vRID-pRID in a multi-
> > viommu case either? In other words, the VMM already has a mapping
> > from vRID to dev_id, so it could call the DEV_INVALIDATE ioctl
> > in the first place?
>
> The xarray exists to optimize the invalidation flow.
>
> For SW you can imagine issuing an invalidation against the viommu
> itself, and *all* commands, be they ASID or ATC invalidations, can be
> processed in one shot. The xarray allows converting the vSID to pSID
> to process ATC invalidations, and the viommu object forces a single
> VMID to handle the ATC invalidations. If we want to do this, I don't
> know.

I drafted some patches with IOMMUFD_DEV_INVALIDATE yesterday,
and ran into the same problem that you point out here: how the
VMM should handle a batch of commands that interleaves ASID and
ATC invalidations. If the VMM splits them into two groups, the
commands get executed in a different order than the one the
guest kernel issued them in. That becomes painful if an error
occurs in the middle of the batch: some later invalidations
would have already completed, yet the CONS index would have to
stop at an earlier command.

And even if a guest queue contains only ATC invalidations,
there is no guarantee that all of the commands target the same
dev_id, i.e. the ATC invalidations themselves would still have
to be split into multiple groups and separate
IOMMUFD_DEV_INVALIDATE calls.

With the xarray, perhaps we could add a viommu_id to the data
structure of the current iommu_hwpt_invalidate, i.e. reshape
the existing invalidate uAPI to be per viommu, so that it could
cover ATC invalidations too instead of adding a separate
IOMMUFD_DEV_INVALIDATE? Then we wouldn't hit the out-of-order
execution problem above.
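
I.e. something like this (just a sketch on top of the existing
uAPI; "id" is only a placeholder name and the other fields are
elided):

	struct iommu_hwpt_invalidate {
		__u32 size;
	-	__u32 hwpt_id;
	+	__u32 id;	/* ID of a HWPT or a viommu object */
		...
	};

When the ID refers to a viommu, the kernel could walk the whole
guest batch in order, converting the vSID in each ATC command
via viommu->devices, so the VMM wouldn't have to regroup
anything.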

> For HW, the xarray holds the vSID to pSID mapping that must be
> programmed into the HW operating the dedicated invalidation queue.

Ah, right! VCMDQ at least needs that.
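
I.e. the lookup on our side would be something like (a rough
sketch, assuming viommu->devices is indexed by vSID and stores
the attached iommufd_device; the function name is made up):

	/* Translate a guest StreamID to the attached device, if any */
	static struct iommufd_device *
	viommu_find_idev(struct iommufd_viommu *viommu, unsigned long vsid)
	{
		return xa_load(&viommu->devices, vsid);
	}

The SW invalidation path would use that to rewrite vSID->pSID
in ATC commands, and for VCMDQ the same mapping would be what
gets programmed into the HW queue.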

Thanks
Nicolin