RE: IRQ affinity problem from virtio_blk

From: Angus Chen
Date: Wed Nov 16 2022 - 06:35:02 EST




> -----Original Message-----
> From: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> Sent: Wednesday, November 16, 2022 6:56 PM
> To: Angus Chen <angus.chen@xxxxxxxxxxxxxxx>; Michael S. Tsirkin
> <mst@xxxxxxxxxx>
> Cc: linux-kernel@xxxxxxxxxxxxxxx; Ming Lei <ming.lei@xxxxxxxxxx>; Jason
> Wang <jasowang@xxxxxxxxxx>
> Subject: RE: IRQ affinity problem from virtio_blk
>
> On Wed, Nov 16 2022 at 01:02, Angus Chen wrote:
> >> On Wed, Nov 16, 2022 at 12:24:24AM +0100, Thomas Gleixner wrote:
> > Any other information I need to provide, please tell me.
>
> A sensible use case for 180+ virtio block devices in a single guest.
>
Our card can provide more than 512 virtio_blk devices.
Each virtio_blk device is passed through to one container (e.g. a docker container),
so we need that many devices.
In the first patch I removed IRQD_AFFINITY_MANAGED from virtio_blk.
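For illustration, the change I have in mind is roughly shaped like this (a
simplified sketch, not the exact patch; vq_done_cb is only a placeholder for
the real virtblk_done callback, and only a single request queue is shown):

#include <linux/virtio.h>
#include <linux/virtio_config.h>

/* Placeholder for the real completion callback in virtio_blk.c. */
static void vq_done_cb(struct virtqueue *vq) { }

static int init_vq_sketch(struct virtio_device *vdev, struct virtqueue **vq)
{
	vq_callback_t *callbacks[] = { vq_done_cb };
	const char *names[] = { "requests" };

	/*
	 * Upstream passes a struct irq_affinity descriptor as the last
	 * argument, which ends up as PCI_IRQ_AFFINITY in virtio-pci and
	 * makes the MSI-X vectors IRQD_AFFINITY_MANAGED.  Passing NULL
	 * leaves the vectors unmanaged.
	 */
	return virtio_find_vqs(vdev, 1, vq, callbacks, names, NULL);
}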

As you know, even if we use a small number of queues per device, like 1 or 2, we
still occupy 80 vectors, since each device needs one config interrupt plus one
interrupt per I/O queue. That is rather wasteful, and it easily exhausts the
IRQ vector resources.

IRQD_AFFINITY_MANAGED by itself is not the problem;
the problem is many devices all using IRQD_AFFINITY_MANAGED at the same time.
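For context, this is roughly where the managed flag comes from in the
virtio-pci path, as far as I can see (simplified from
drivers/virtio/virtio_pci_common.c from memory, so details may differ; shown
only for illustration, not as a patch):

static int vp_request_msix_vectors(struct virtio_device *vdev, int nvectors,
				   bool per_vq_vectors, struct irq_affinity *desc)
{
	struct virtio_pci_device *vp_dev = to_vp_device(vdev);
	unsigned int flags = PCI_IRQ_MSIX;

	if (desc) {
		flags |= PCI_IRQ_AFFINITY;	/* vectors become managed */
		desc->pre_vectors++;		/* config vector is not spread */
	}

	return pci_alloc_irq_vectors_affinity(vp_dev->pci_dev, nvectors,
					      nvectors, flags, desc);
}

Every managed vector is reserved in the per-CPU vector space, so with several
hundred virtio_blk devices the reservations add up quickly even though each
single device looks cheap.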

Thanks.

> Thanks,
>
> tglx