Re: [RFC PATCH 1/5] nvme-pci: add function nvme_submit_vf_cmd to issue admin commands for VF driver.

From: Max Gurtovoy
Date: Sun Dec 11 2022 - 06:40:08 EST



On 12/7/2022 10:08 PM, Jason Gunthorpe wrote:
> On Wed, Dec 07, 2022 at 07:33:33PM +0100, Christoph Hellwig wrote:
> > On Wed, Dec 07, 2022 at 01:31:44PM -0400, Jason Gunthorpe wrote:
> > > Sorry, I meant VF. Your continued use of SR-IOV terminology now keeps
> > > confusing my mind so much that I start mistyping things.
> > > Well, what words do you want to use?
> > The same I've used through this whole thread: controlling and
> > controlled function.
>
> > > So I don't think I've learned anything more about your objection.
> > >
> > > "fundamentally broken" doesn't help
> > The objection is that:
> >
> >  - in hardware fundamentally only the controlling function can
> >    control live migration features on the controlled function,
> >    because the controlled function is assigned to a VM which has
> >    control over it.
> Yes

> However hisilicon managed to do their implementation without this, or
> rather you could say their "controlling function" is a single MMIO BAR
> page in their PCI VF and their "controlled function" is the rest of
> the PCI VF.

> >  - for the same reason there is no portable way to even find
> >    the controlling function from a controlled function, unless
> >    you want to equate PF = controlling and VF = controlled,
> >    and even that breaks down in some corner cases
> As you say, the kernel must know the relationship between
> controlling->controlled. Nothing else is sane.
>
> If the kernel knows this information then we can find a way for the
> vfio_device to have pointers to both controlling and controlled
> objects. I have a suggestion below.

> >  - if you want to control live migration from the controlled
> >    VM you need a new vfio subdriver for a function that has
> >    absolutely no new functionality itself related to live
> >    migration (quirks for bugs excepted).
> I see it differently, the VFIO driver *is* the live migration
> driver. Look at all the drivers that have come through and they are
> 99% live migration code. They have, IMHO, properly split the live
> migration concern out of their controlling/PF driver and placed it in
> the "VFIO live migration driver".
>
> We've done a pretty good job of allowing the VFIO live migration
> driver to pretty much just focus on live migration stuff and delegate
> the VFIO part to library code.
>
> Excepting quirks and bugs sounds nice, except we actually can't ignore
> them. Having all the VFIO capabilities is exactly how we are fixing
> the quirks and bugs in the first place, and I don't see your vision
> how we can continue to do that if we split all the live migration code
> into yet another subsystem.
>
> For instance how do I trap FLR like mlx5 must do if the
> drivers/live_migration code cannot intercept the FLR VFIO ioctl?
>
> How do I mangle and share the BAR like hisilicon does?
>
> Which is really why this is in VFIO in the first place. It actually is
> coupled in practice, if not in theory.

> > So by this architecture you build a convoluted mess where you need
> > tons of vfio subdrivers that mostly just talk to the driver for the
> > controlling function, which they can't even sanely discover. That's
> > what I keep calling fundamentally broken.
> The VFIO live migration drivers will look basically the same if you
> put them under drivers/live_migration. This cannot be considered a
> "convoluted mess", as splitting things by subsystem is best practice,
> AFAIK.
>
> If we accept that drivers/vfio can be the "live migration subsystem"
> then let's go all the way and have the controlling driver call
> vfio_device_group_register() to create the VFIO char device for the
> controlled function.
>
> This solves the "sanely discover" problem because of course the
> controlling function driver knows what the controlled function is and
> it can acquire both functions before it calls
> vfio_device_group_register().
>
> This is actually what I want to do anyhow for SIOV-like functions and
> VFIO. Doing it for PCI VFs (or related PFs) is very nice symmetry. I
> really dislike that our current SRIOV model in Linux forces the VF to
> instantly exist without a chance for the controlling driver to
> provision it.
>
> We have some challenges on how to do this in the kernel, but I think
> we can overcome them. VFIO is ready for this thanks to all the
> restructuring work we already did.
>
> I'd really like to get away from VFIO having to do all this crazy
> sysfs crap to activate its driver. I think there is a lot of appeal to
> having, say, an nvme-cli command that just commands the controlling
> driver to provision a function, enable live migration, configure it
> and then make it visible via VFIO. The same API regardless of whether
> the underlying controlled function technology is PF/VF/SIOV.
>
> At least we have been very interested in doing that for networking
> devices.
>
> Jason

Jason/Christoph,

As I mentioned earlier, we have two orthogonal tracks here: the implementation and the spec. For some reason they are being mixed together in this discussion.

I've tried to understand the spec-related issues raised here; if we fix them, the implementation becomes possible and all the VFIO work we've already done can be reused.

At a high level, I think the spec needs the following:

1. Define the concept of a "virtual subsystem". A primary controller will be able to create a virtual subsystem. Inside this subsystem the primary controller will be the master ("the controlling function") of the migration process. It will also be able to add secondary controllers to this virtual subsystem and assign a "virtual controller ID" (vCID) to each.
something like:
- nvme virtual_subsys_create --dev=/dev/nvme1 --virtual_nqn="my_v_nqn_1" --dev_vcid=1
- nvme virtual_subsys_add --dev=/dev/nvme1 --virtual_nqn="my_v_nqn_1" --secondary_dev=/dev/nvme2 --secondary_dev_vcid=20

2. Now the primary controller has a list of controllers inside its virtual subsystem for migration, and it holds a handle to them that doesn't go away after the controlled function is bound to VFIO.

3. The same virtual subsystem should be created on the destination hypervisor.

4. Now the migration process starts using the VFIO uAPI, and we will get to the point where the VFIO driver of the controlled function needs to ask the controlling function to send admin commands to manage the migration process.
    How to do that is an implementation detail: we can set a pointer in the pci_dev/dev structures, or we can provide nvme_migration_handle_get(controlled_function), or NVMe can expose APIs dedicated to migration such as nvme_state_save(controlled_function).


When creating a virtual subsystem and adding controllers to it, we can prevent any leakage and narrow functionality that would otherwise make migration impossible, for example via additional admin commands.
After the migration process is over, one can remove the secondary controller from the virtual subsystem and reuse it.

WDYT?