IRQ affinity problem from virtio_blk

From: Angus Chen
Date: Mon Nov 14 2022 - 22:40:21 EST


Hi all,
I tested Linux 6.1 and found that virtio_blk requests its interrupts with managed affinity (IRQD_AFFINITY_MANAGED).
The machine has 80 CPUs across two NUMA nodes.
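
For reference, this is roughly where the managed affinity comes from; a sketch paraphrased from drivers/block/virtio_blk.c and drivers/virtio/virtio_pci_common.c in 6.1, not a verbatim quote:

	/* virtio_blk passes an irq_affinity descriptor when setting up its vqs. */
	static int init_vq(struct virtio_blk *vblk)
	{
		struct irq_affinity desc = { 0, };
		/* ... */
		err = virtio_find_vqs(vdev, num_vqs, vblk->vqs, callbacks,
				      names, &desc);
		/* ... */
	}

	/* With a non-NULL desc, virtio-pci requests its MSI-X vectors with
	 * PCI_IRQ_AFFINITY, which is what makes the per-vq interrupts
	 * IRQD_AFFINITY_MANAGED. */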

Before probing one virtio_blk device:
crash_cts> p *vector_matrix
$44 = {
matrix_bits = 256,
alloc_start = 32,
alloc_end = 236,
alloc_size = 204,
global_available = 15354,
global_reserved = 154,
systembits_inalloc = 3,
total_allocated = 411,
online_maps = 80,
maps = 0x46100,
scratch_map = {1160908723191807, 0, 1, 18435222497520517120},
system_map = {1125904739729407, 0, 1, 18435221191850459136}
}
After probing one virtio_blk device:
crash_cts> p *vector_matrix
$45 = {
matrix_bits = 256,
alloc_start = 32,
alloc_end = 236,
alloc_size = 204,
global_available = 15273,
global_reserved = 154,
systembits_inalloc = 3,
total_allocated = 413,
online_maps = 80,
maps = 0x46100,
scratch_map = {25769803776, 0, 0, 14680064},
system_map = {1125904739729407, 0, 1, 18435221191850459136}
}

We can see that global_available drops from 15354 to 15273, a difference of 81, which looks like one vector reserved on each of the 80 CPUs for the managed vq irq plus one vector for the config irq.
And total_allocated increases from 411 to 413: one config irq and one vq irq.

It is easy to exhaust the vector space this way, because there can be more than 512 virtio_blk devices in a system.
And from reading the irq matrix code, with IRQD_AFFINITY_MANAGED set this per-CPU reservation is intentional; it is a feature, not a bug.
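
A rough back-of-the-envelope with the numbers above, assuming every additional virtio_blk device behaves like this one (one config irq plus one managed vq irq spread over all 80 CPUs):

	vectors consumed per device = 80 (managed reservation) + 1 (config) = 81
	devices until exhaustion    = 15273 / 81, about 188

So the matrix would run out long before 512 devices, even though total_allocated itself stays small.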

If we do exhaust the vectors, the per_vq_vectors allocation breaks, so virtblk_map_queues() will eventually fall back to blk_mq_map_queues().
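
The fallback path I mean is roughly the following; a sketch paraphrased from drivers/virtio/virtio_pci_common.c and block/blk-mq-virtio.c, not verbatim:

	/* vp_find_vqs(): first try MSI-X with one vector per vq (the
	 * per_vq_vectors case that needs all those managed vectors), then
	 * retry with a single shared vector for all vqs, then fall back
	 * to INTx. */
	err = vp_find_vqs_msix(vdev, nvqs, vqs, callbacks, names,
			       true /* per_vq_vectors */, ctx, desc);
	if (!err)
		return 0;
	err = vp_find_vqs_msix(vdev, nvqs, vqs, callbacks, names,
			       false /* shared vq vector */, ctx, desc);
	if (!err)
		return 0;
	return vp_find_vqs_intx(vdev, nvqs, vqs, callbacks, names, ctx);

	/* blk_mq_virtio_map_queues(): once per_vq_vectors is gone,
	 * get_vq_affinity() gives us nothing to map with and we end up in
	 * the generic blk_mq_map_queues(). */
	mask = vdev->config->get_vq_affinity(vdev, first_vec + queue);
	if (!mask)
		return blk_mq_map_queues(qmap);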

Or, even if we do not exhaust the vectors completely, if one CPU has consumed more vector bits than the others, the IRQD_AFFINITY_MANAGED reservation will fail too, because it needs a free bit on every CPU in the mask and the usage is not balanced.
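
That is because the managed reservation has to succeed on every CPU in the mask; a sketch of the relevant loop, paraphrased from kernel/irq/matrix.c, not verbatim:

	/* irq_matrix_reserve_managed(): reserve one bit on *each* CPU in the
	 * mask.  If any single CPU has no free bit left, the whole
	 * reservation is rolled back and the request fails. */
	for_each_cpu(cpu, msk) {
		struct cpumap *cm = per_cpu_ptr(m->maps, cpu);
		unsigned int bit;

		bit = matrix_alloc_area(m, cm, 1, true);
		if (bit >= m->alloc_end)
			goto cleanup;	/* one overloaded CPU fails it all */
		cm->managed++;
		if (cm->online) {
			cm->available--;
			m->global_available--;
		}
	}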

I'm not a native English speaker; any suggestions would be appreciated.