[PATCH] x86/hyperv: Fix IRQ effective CPU discovery for interrupt unmasking

From: Stanislav Kinsburskii
Date: Fri Apr 14 2023 - 14:05:29 EST


The existing code uses the conjunction of the IRQ effective affinity mask and the
CPU online mask to find the CPU id to map an interrupt to.
It looks like the intention was to make sure that an IRQ won't be mapped to an
offline CPU.

Although it works correctly today, there are two problems with it:
1. The IRQ effective affinity mask already consists only of online CPUs, so
intersecting it with the online CPU mask is redundant.
2. cpumask_first_and() can return nr_cpu_ids if the IRQ affinity mask ever
contains only offline CPUs, in which case the current implementation will
likely crash the kernel in hv_map_interrupt() due to an attempt to use an
invalid CPU id when building the VP set.
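
To illustrate the hazard, a minimal sketch of the failing pattern (simplified
from the call sites below; the bail-out is hypothetical and not part of this
patch):

	const struct cpumask *affinity = irq_data_get_effective_affinity_mask(data);
	int cpu = cpumask_first_and(affinity, cpu_online_mask);

	/*
	 * If the two masks do not intersect, cpumask_first_and() returns
	 * nr_cpu_ids, which is not a valid CPU id; passing it further down
	 * (e.g. into hv_map_interrupt()) would index past the valid range.
	 */
	if (cpu >= nr_cpu_ids)
		return;	/* hypothetical guard, shown only for illustration */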

This patch fixes this logic by taking the first bit of the effective affinity
mask as the CPU to map the IRQ to.
It also adds a paranoid WARN_ON_ONCE() for the case when the selected CPU
turns out to be offline.

Signed-off-by: Stanislav Kinsburskii <stanislav.kinsburskii@xxxxxxxxx>
CC: "K. Y. Srinivasan" <kys@xxxxxxxxxxxxx>
CC: Haiyang Zhang <haiyangz@xxxxxxxxxxxxx>
CC: Wei Liu <wei.liu@xxxxxxxxxx>
CC: Dexuan Cui <decui@xxxxxxxxxxxxx>
CC: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
CC: Ingo Molnar <mingo@xxxxxxxxxx>
CC: Borislav Petkov <bp@xxxxxxxxx>
CC: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
CC: x86@xxxxxxxxxx
CC: "H. Peter Anvin" <hpa@xxxxxxxxx>
CC: Joerg Roedel <joro@xxxxxxxxxx>
CC: Will Deacon <will@xxxxxxxxxx>
CC: Robin Murphy <robin.murphy@xxxxxxx>
CC: linux-hyperv@xxxxxxxxxxxxxxx
CC: linux-kernel@xxxxxxxxxxxxxxx
CC: iommu@xxxxxxxxxxxxxxx
---
arch/x86/hyperv/irqdomain.c | 7 ++++---
drivers/iommu/hyperv-iommu.c | 7 ++++---
2 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/arch/x86/hyperv/irqdomain.c b/arch/x86/hyperv/irqdomain.c
index 42c70d28ef27..759774b5ab2f 100644
--- a/arch/x86/hyperv/irqdomain.c
+++ b/arch/x86/hyperv/irqdomain.c
@@ -192,7 +192,6 @@ static void hv_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
struct pci_dev *dev;
struct hv_interrupt_entry out_entry, *stored_entry;
struct irq_cfg *cfg = irqd_cfg(data);
- const cpumask_t *affinity;
int cpu;
u64 status;

@@ -204,8 +203,10 @@ static void hv_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
return;
}

- affinity = irq_data_get_effective_affinity_mask(data);
- cpu = cpumask_first_and(affinity, cpu_online_mask);
+ cpu = cpumask_first(irq_data_get_effective_affinity_mask(data));
+
+ /* Paranoia check: the cpu must be online */
+ WARN_ON_ONCE(!cpumask_test_cpu(cpu, cpu_online_mask));

if (data->chip_data) {
/*
diff --git a/drivers/iommu/hyperv-iommu.c b/drivers/iommu/hyperv-iommu.c
index 8302db7f783e..632e9c123bbf 100644
--- a/drivers/iommu/hyperv-iommu.c
+++ b/drivers/iommu/hyperv-iommu.c
@@ -197,15 +197,16 @@ hyperv_root_ir_compose_msi_msg(struct irq_data *irq_data, struct msi_msg *msg)
u32 vector;
struct irq_cfg *cfg;
int ioapic_id;
- const struct cpumask *affinity;
int cpu;
struct hv_interrupt_entry entry;
struct hyperv_root_ir_data *data = irq_data->chip_data;
struct IO_APIC_route_entry e;

cfg = irqd_cfg(irq_data);
- affinity = irq_data_get_effective_affinity_mask(irq_data);
- cpu = cpumask_first_and(affinity, cpu_online_mask);
+ cpu = cpumask_first(irq_data_get_effective_affinity_mask(irq_data));
+
+ /* Paranoia check: the cpu must be online */
+ WARN_ON_ONCE(!cpumask_test_cpu(cpu, cpu_online_mask));

vector = cfg->vector;
ioapic_id = data->ioapic_id;