Re: [PATCH v2] PCI: Disable Samsung SM951/PM951 NVMe before FLR

From: Alex Williamson
Date: Thu Jul 01 2021 - 17:25:09 EST


On Thu, 1 Jul 2021 15:15:45 -0500
Bjorn Helgaas <helgaas@xxxxxxxxxx> wrote:

> On Thu, Jul 01, 2021 at 08:59:49PM +0100, Christoph Hellwig wrote:
> > On Thu, Jul 01, 2021 at 02:38:56PM -0500, Bjorn Helgaas wrote:
> > > On Fri, Apr 30, 2021 at 06:01:19PM -0500, Robert Straw wrote:
> > > > The SM951/PM951, when used in conjunction with the vfio-pci driver and
> > > > passed to a KVM guest, can exhibit the fatal state addressed by the
> > > > existing `nvme_disable_and_flr` quirk. If the guest cleanly shuts down
> > > > the SSD, and vfio-pci attempts an FLR to the device while it is in this
> > > > state, the nvme driver will fail when it attempts to bind to the device
> > > > after the FLR due to the frozen config area, e.g.:
> > > >
> > > > nvme nvme2: frozen state error detected, reset controller
> > > > nvme nvme2: Removing after probe failure status: -12
> > > >
> > > > By including this older model (Samsung 950 PRO) of the controller in the
> > > > existing quirk, the device can be cleanly reset after being used
> > > > by a KVM guest.
> > > >
> > > > Signed-off-by: Robert Straw <drbawb@xxxxxxxxxxxxxxx>
> > >
> > > Applied to pci/virtualization for v5.14, thanks!
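
(For context, the patch itself is essentially one new entry in the
pci_dev_reset_methods[] table in drivers/pci/quirks.c, hooking the
950 PRO's device ID up to the existing quirk; from memory, something
like:

  { PCI_VENDOR_ID_SAMSUNG, 0xa802, nvme_disable_and_flr },

where 0xa802 is what lspci reports for the SM951/PM951 -- check the
actual patch for the exact ID.)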
> >
> > FYI, I really do not like the idea of the PCIe core messing with NVMe
> > registers like this.

What are the specific concerns with the PCI core messing with NVMe
registers, or any other device-specific registers?

The PCI core is being told to reset the device, so device-specific
registers will be affected regardless of whether we poke them directly
or only implicitly through the reset itself.

> I hadn't looked at the nvme_disable_and_flr() implementation, but yes,
> I see what you mean, that *is* ugly. I dropped this patch for now.

This attempts to implement the minimum code necessary to disable the
device per the spec. Even though the spec reference isn't the latest,
it should still apply to newer devices (I assume the NVMe standard
cares about backwards compatibility).
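
For anyone who hasn't looked, the quirk boils down to roughly the
following (a simplified sketch, not the verbatim drivers/pci/quirks.c
code; the register and bit definitions come from <linux/nvme.h>, and
the real quirk derives its timeout from CAP.TO rather than hardcoding
one):

  /*
   * Sketch of the nvme_disable_and_flr() idea: quiesce the
   * controller per the NVMe spec, then issue the FLR.
   */
  static int nvme_disable_then_flr(struct pci_dev *dev)
  {
          void __iomem *bar;
          u32 cfg;
          int i;

          /* Map enough of BAR0 to reach the CC and CSTS registers */
          bar = pci_iomap(dev, 0, NVME_REG_CSTS + sizeof(cfg));
          if (!bar)
                  return -ENOTTY;

          cfg = readl(bar + NVME_REG_CC);
          if (cfg & NVME_CC_ENABLE) {
                  /* Clear CC.EN to ask the controller to disable itself */
                  writel(cfg & ~NVME_CC_ENABLE, bar + NVME_REG_CC);

                  /* CSTS.RDY drops to zero once the disable completes */
                  for (i = 0; i < 100; i++) {
                          if (!(readl(bar + NVME_REG_CSTS) &
                                NVME_CSTS_RDY))
                                  break;
                          msleep(100);
                  }
          }

          pci_iounmap(dev, bar);

          return pcie_flr(dev);
  }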

> I see that you suggested earlier that we not allow these devices to be
> assigned via VFIO [1]. Is that practical? Sounds like it could be
> fairly punitive.

Punitive, yes. Most hardware is broken in one way or another.

> I assume this reset is normally used when vfio-pci is the driver in
> the host kernel and there probably is no guest. In that particular
> case, I'd guess there's no conflict, but as you say, the sysfs reset
> attribute could trigger this reset when there *is* a guest driver, so
> there *would* be a conflict.
>
> Could we coordinate this reset with vfio somehow so we only use
> nvme_disable_and_flr() when there is no guest?

We can trigger a reset via sysfs whether the host driver is vfio-pci or
any other device driver. I don't understand what that has to do with
specifically messing with NVMe registers here. Don't we usually say
that resetting *any* running device via sysfs is a
shoot-yourself-in-the-foot scenario? `echo 0 > enable` would be equally
inadvisable, as would using setpci to manually trigger a reset, poking
BAR registers, or writing garbage to the resource attributes.

vfio tries to make use of this in coordination with userspace, either
when userspace requests a device reset or to attempt to clear device
state so it isn't leaked between users (more applicable when we're not
talking about mass storage devices). In a VM scenario, that should
correspond to something like a VM reset or the guest poking FLR.
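
(From userspace, that request is just the vfio reset ioctl; a minimal
sketch, assuming device_fd is a vfio device file descriptor already
obtained through the usual group/container setup:

  #include <sys/ioctl.h>
  #include <linux/vfio.h>

  /* Ask vfio-pci to reset the device, e.g. on VM reset */
  if (ioctl(device_fd, VFIO_DEVICE_RESET) < 0)
          perror("VFIO_DEVICE_RESET");

vfio-pci then picks the best reset method available for the device,
which is where a quirk like the one above comes into play.)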

I think the sysfs reset mechanism used to be more useful for VMs back
in the days of legacy KVM device assignment, when it was usually
libvirt trying to reset a device rather than a host kernel driver like
vfio-pci. I still find it useful for some use cases, and it's not like
there aren't plenty of other ways to break a device out from under its
running driver if you're sufficiently privileged. What's really the
issue here? Thanks,

Alex