[PATCH] PCI: hv: Only reuse existing IRTE allocation for Multi-MSI

From: Dexuan Cui
Date: Wed Aug 03 2022 - 22:51:53 EST


Jeffrey's four recent patches added Multi-MSI support to the pci-hyperv driver.
Unfortunately, one of them, b4b77778ecc5, causes a regression in a fio test on
the Azure VM SKU Standard L64s v2 (64 AMD vCPUs, 8 NVMe drives):

When fio runs against all 8 NVMe drives, it runs fine with a low io-depth
(e.g., 2 or 4). With a high io-depth (e.g., 256), queue-29 of each NVMe drive
suddenly stops receiving interrupts, and the NVMe core code has to abort the
queue after a 30-second timeout; queue-29 then receives interrupts again for
several seconds before going quiet once more, and this pattern repeats:

[ 223.891249] nvme nvme2: I/O 320 QID 29 timeout, aborting
[ 223.896231] nvme nvme0: I/O 320 QID 29 timeout, aborting
[ 223.898340] nvme nvme4: I/O 832 QID 29 timeout, aborting
[ 259.471309] nvme nvme2: I/O 320 QID 29 timeout, aborting
[ 259.476493] nvme nvme0: I/O 321 QID 29 timeout, aborting
[ 259.482967] nvme nvme0: I/O 322 QID 29 timeout, aborting

Other symptoms: with commit b4b77778ecc5 the throughput of the NVMe drives
drops, and while the fio test is running the kernel prints soft-lockup
messages from time to time.

Commit b4b77778ecc5 itself looks good, and at the moment it's unclear where
the issue is. While the issue is being investigated, restore the old behavior
in hv_compose_msi_msg(), i.e., don't reuse the existing IRTE allocation for
single-MSI and MSI-X. This is a stopgap for the above NVMe issue.
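
In condensed form, the change boils down to the following (a sketch only;
identifiers match the diff further down, with the hpdev lookup and error
handling omitted):

	msi_desc  = irq_data_get_msi_desc(data);
	multi_msi = !msi_desc->pci.msi_attrib.is_msix &&
		    msi_desc->nvec_used > 1;

	if (data->chip_data && multi_msi) {
		/* Multi-MSI: must reuse the IRTE allocated for the 1st vector */
		int_desc = data->chip_data;
		msg->address_hi = int_desc->address >> 32;
		msg->address_lo = int_desc->address & 0xffffffff;
		msg->data = int_desc->data;
		return;
	}

	/*
	 * Single-MSI and MSI-X: free any previously composed message and
	 * fall through to allocate a fresh IRTE, as before b4b77778ecc5.
	 */
	if (data->chip_data && !multi_msi) {
		int_desc = data->chip_data;
		data->chip_data = NULL;
		hv_int_desc_free(hpdev, int_desc);
	}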

Fixes: b4b77778ecc5 ("PCI: hv: Reuse existing IRTE allocation in compose_msi_msg()")
Signed-off-by: Dexuan Cui <decui@xxxxxxxxxxxxx>
Cc: Jeffrey Hugo <quic_jhugo@xxxxxxxxxxx>
Cc: Carl Vanderlip <quic_carlv@xxxxxxxxxxx>
---
drivers/pci/controller/pci-hyperv.c | 23 +++++++++++++++++++----
1 file changed, 19 insertions(+), 4 deletions(-)

diff --git a/drivers/pci/controller/pci-hyperv.c b/drivers/pci/controller/pci-hyperv.c
index db814f7b93ba..65d0dab25deb 100644
--- a/drivers/pci/controller/pci-hyperv.c
+++ b/drivers/pci/controller/pci-hyperv.c
@@ -1701,6 +1701,7 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
struct compose_comp_ctxt comp;
struct tran_int_desc *int_desc;
struct msi_desc *msi_desc;
+ bool multi_msi;
u8 vector, vector_count;
struct {
struct pci_packet pci_pkt;
@@ -1714,8 +1715,16 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
u32 size;
int ret;

- /* Reuse the previous allocation */
- if (data->chip_data) {
+ msi_desc = irq_data_get_msi_desc(data);
+ multi_msi = !msi_desc->pci.msi_attrib.is_msix &&
+ msi_desc->nvec_used > 1;
+ /*
+ * Reuse the previous allocation for Multi-MSI. This is required for
+ * Multi-MSI and is optional for single-MSI and MSI-X. Note: for now,
+ * don't reuse the previous allocation for MSI-X because this causes
+ * unreliable interrupt delivery for some NVMe devices.
+ */
+ if (data->chip_data && multi_msi) {
int_desc = data->chip_data;
msg->address_hi = int_desc->address >> 32;
msg->address_lo = int_desc->address & 0xffffffff;
@@ -1723,7 +1732,6 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
return;
}

- msi_desc = irq_data_get_msi_desc(data);
pdev = msi_desc_to_pci_dev(msi_desc);
dest = irq_data_get_effective_affinity_mask(data);
pbus = pdev->bus;
@@ -1733,11 +1741,18 @@ static void hv_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
if (!hpdev)
goto return_null_message;

+ /* Free any previous message that might have already been composed. */
+ if (data->chip_data && !multi_msi) {
+ int_desc = data->chip_data;
+ data->chip_data = NULL;
+ hv_int_desc_free(hpdev, int_desc);
+ }
+
int_desc = kzalloc(sizeof(*int_desc), GFP_ATOMIC);
if (!int_desc)
goto drop_reference;

- if (!msi_desc->pci.msi_attrib.is_msix && msi_desc->nvec_used > 1) {
+ if (multi_msi) {
/*
* If this is not the first MSI of Multi MSI, we already have
* a mapping. Can exit early.
--
2.25.1