Re: [RFC/RFT PATCH] scsi: pm8001: Expose HW queues for pm80xx hw

From: John Garry
Date: Mon Jan 11 2021 - 07:07:51 EST


On 11/01/2021 11:57, Jinpu Wang wrote:
Hi John,


On Tue, Jan 5, 2021 at 12:21 PM John Garry <john.garry@xxxxxxxxxx> wrote:

In commit 05c6c029a44d ("scsi: pm80xx: Increase number of supported
queues"), support for the 80xx chip was improved by enabling multiple HW
queues.

In that change, as in other SCSI MQ HBA drivers, the HW queues were not
exposed to the upper layer; instead, the driver managed the queues internally.

However, this management duplicates blk-mq code. In addition, the HW queue
management is sub-optimal for a system where the number of CPUs exceeds the
number of HW queues - queues are selected in a round-robin fashion, when it
would be better to have adjacent CPUs submit on the same queue. And finally,
the affinity of the completion queue interrupts is not set to mirror the
cpu<->HW queue mapping, which is suboptimal.
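
As a rough illustration of the difference, consider the two selection
schemes below (struct my_hba and its fields are invented for the sketch,
not pm80xx code): round-robin ignores the submitting CPU, while a CPU-based
lookup keeps all submissions from a given CPU on the same queue.

#include <linux/atomic.h>
#include <linux/smp.h>
#include <linux/types.h>

struct my_hba {				/* made-up driver data */
	atomic_t next_queue;
	u32 nr_queues;
	u32 cpu_to_queue[NR_CPUS];
};

/* Internal round-robin: the submitting CPU plays no part */
static u32 pick_queue_round_robin(struct my_hba *hba)
{
	return (u32)atomic_inc_return(&hba->next_queue) % hba->nr_queues;
}

/* CPU-based selection: every submission from a given CPU hits one queue */
static u32 pick_queue_by_cpu(struct my_hba *hba)
{
	return hba->cpu_to_queue[raw_smp_processor_id()];
}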

As such, when MSI-X is supported, expose the HW queues to the upper layer.
The PCI_IRQ_AFFINITY flag is set when allocating the MSI-X vectors so that
affinity for the completion queue interrupts is assigned automatically.
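
A minimal sketch of the approach, roughly as other SCSI MQ drivers that
expose their HW queues do it (my_hba, its pdev/max_queues fields and the
function names are assumptions, not the real pm80xx code):

#include <linux/pci.h>
#include <linux/blk-mq-pci.h>
#include <scsi/scsi_host.h>

static int my_setup_msix(struct my_hba *hba, struct Scsi_Host *shost)
{
	int nvec;

	/* The core spreads the vectors (and their irq affinity) over all CPUs */
	nvec = pci_alloc_irq_vectors(hba->pdev, 1, hba->max_queues,
				     PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
	if (nvec < 0)
		return nvec;

	/* Expose one HW queue per completion queue interrupt */
	shost->nr_hw_queues = nvec;
	return 0;
}

/* ->map_queues: mirror the PCI irq affinity as the cpu<->HW queue map */
static int my_map_queues(struct Scsi_Host *shost)
{
	struct my_hba *hba = shost_priv(shost);

	return blk_mq_pci_map_queues(&shost->tag_set.map[HCTX_TYPE_DEFAULT],
				     hba->pdev, 0);
}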

Signed-off-by: John Garry <john.garry@xxxxxxxxxx>

---
I am sending this as an RFC/RFT as I have no HW to test. In addition, since
HW queue #0 is always used for internal commands (as in send_task_abort()),
if all CPUs associated with HW queue #0 are offlined, the interrupt for that
queue will be shut down, and no CPU would be available to service the
completion of any internal command. To solve that, we need [0] merged first
and to switch over to the new API. But we can still test performance in the
meantime.

I assume someone else is making the change to use the request tag for IO
tag management.
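
For reference, a sketch of what that could look like in queuecommand
(my_hba, my_cmd_ctx and my_hw_post are invented for the illustration, and
scmd->request assumes the kernel of this vintage):

#include <linux/blk-mq.h>
#include <scsi/scsi_cmnd.h>
#include <scsi/scsi_host.h>

static int my_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *scmd)
{
	struct my_hba *hba = shost_priv(shost);
	u32 unique = blk_mq_unique_tag(scmd->request);
	u16 hwq = blk_mq_unique_tag_to_hwq(unique);	/* HW queue index */
	u16 tag = blk_mq_unique_tag_to_tag(unique);	/* per-queue tag */
	struct my_cmd_ctx *ctx = &hba->queues[hwq].ctx[tag];

	ctx->scmd = scmd;
	/* build the command and post it on HW queue 'hwq', keyed by 'tag' */
	return my_hw_post(hba, hwq, ctx);
}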

[0] https://lore.kernel.org/linux-scsi/47ba045e-a490-198b-1744-529f97192d3b@xxxxxxx/
Thanks for the patch, maybe Viswas can help to test it?


That's what I am hoping for :)

Thanks!