RE: [PATCH v1 9/9] samples: add vfio-mdev-pci driver

From: Liu, Yi L
Date: Thu Jul 11 2019 - 08:27:31 EST


Hi Alex,

> From: kvm-owner@xxxxxxxxxxxxxxx [mailto:kvm-owner@xxxxxxxxxxxxxxx] On Behalf
> Of Alex Williamson
> Sent: Friday, July 5, 2019 11:55 PM
> To: Liu, Yi L <yi.l.liu@xxxxxxxxx>
> Subject: Re: [PATCH v1 9/9] samples: add vfio-mdev-pci driver
>
> On Thu, 4 Jul 2019 09:11:02 +0000
> "Liu, Yi L" <yi.l.liu@xxxxxxxxx> wrote:
>
> > Hi Alex,
> >
> > > From: Alex Williamson [mailto:alex.williamson@xxxxxxxxxx]
> > > Sent: Thursday, July 4, 2019 1:22 AM
> > > To: Liu, Yi L <yi.l.liu@xxxxxxxxx>
> > > Subject: Re: [PATCH v1 9/9] samples: add vfio-mdev-pci driver
[...]
> >
> > > It's really unfortunate that we don't have the mdev inheriting the
> > > iommu group of the iommu_device so that userspace can really understand
> > > this relationship. A separate group makes sense for the aux-domain
> > > case, and is (I guess) not a significant issue in the case of a
> > > singleton iommu_device group, but it's pretty awkward here. Perhaps
> > > this is something we should correct in design of iommu backed mdevs.
> >
> > Yeah, for the aux-domain case, it is not a significant issue, as aux-domain
> > essentially means a singleton iommu_device group. And early on, when
> > designing the support for wrapping a PCI device as an mdev, we also
> > considered letting vfio-mdev-pci reuse the iommu_device group. But this
> > results in an iommu-backed group that includes both mdev and physical
> > devices, which might also be strange. Do you think it is worth
> > reconsidering?
>
> From a group perspective, the cleanest solution would seem to be that
> IOMMU backed mdevs w/o aux domain support should inherit the IOMMU
> group of the iommu_device,

A confirmation here. Regarding inheriting the IOMMU group of the iommu_device:
do you mean the mdev device should be added to the IOMMU group of the
iommu_device, or that vfio should maintain a parent/inheritor relationship
between them? I guess you mean the latter? :-)

> but I think the barrier here is that we have
> a difficult time determining if the group is "viable" in that case.
> For example a group where one device is bound to a native host driver
> and the other device bound to a vfio driver would typically be
> considered non-viable as it breaks the isolation guarantees. However

Yes, this is how vfio guarantees isolation before allowing a user to add a
group to a vfio container and so on.

> I think in this configuration, the parent device is effectively
> participating in the isolation and "donating" its iommu group on behalf
> of the mdev device. I don't think we can simultaneously use that iommu
> group for any other purpose.

Agree. At least the host cannot make use of the iommu group any more in such
a configuration.

> I'm sure we could come up with a way for
> vfio-core to understand this relationship and add it to the white list,

In this configuration the host driver still exists, while we want to let the
mdev device somehow "own" the iommu-backed DMA isolation capability. So one
possible way may be to call vfio_add_group_dev(), which creates a vfio_device
instance for the iommu_device in vfio.c, when creating an iommu-backed mdev.
Then the iommu group would be considered viable.
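If that direction is taken, the call site might look roughly like the sketch
below. This is only an illustration of the idea, not tested code:
vfio_add_group_dev()/vfio_del_group_dev() are the existing vfio.c interfaces,
while the function names and vfio_mdev_pci_parent_ops are made up here.

```c
/*
 * Hypothetical hook run when an iommu-backed mdev is created.
 * Registering the parent iommu_device with vfio-core puts it on the
 * list of devices vfio knows about, so the group viability test can
 * pass even though the parent keeps its host driver.
 */
static int vfio_mdev_pci_register_parent(struct device *parent)
{
	/*
	 * vfio_mdev_pci_parent_ops would be a minimal, mostly empty
	 * vfio_device_ops: the parent is never opened by userspace,
	 * it only "donates" its iommu group to the mdev.
	 */
	return vfio_add_group_dev(parent, &vfio_mdev_pci_parent_ops, NULL);
}

/* Undo the registration when the mdev is removed. */
static void vfio_mdev_pci_unregister_parent(struct device *parent)
{
	vfio_del_group_dev(parent);
}
```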

> I wonder though how confusing this might be to users who now understand
> the group/driver requirement to be "all endpoints bound to vfio
> drivers". This might still be the best approach regardless of this.

Yes, another thing I'm considering is how to prevent such a host driver from
issuing DMA. If we end up with one device bound to vfio-pci and another device
wrapped as an mdev, and pass both through to a VM, the host driver is still
capable of issuing DMA. The IOMMU can block some DMAs, but not all of them:
if a DMA issued by the host driver happens to hit a mapping in the IOMMU, the
host is effectively doing things on behalf of the VM. Though we may trust the
host driver, it still looks a little bit awkward to me. :-(

> Thanks,
>
> Alex

Regards,
Yi Liu