RE: [EXTERNAL] Re: [PATCH v4 2/2] arm64: PCI: hv: Add support for Hyper-V vPCI

From: Sunil Muthuswamy
Date: Wed Nov 10 2021 - 17:48:34 EST


> > > On Tue, 09 Nov 2021 22:14:20 +0000,
> > > Sunil Muthuswamy <sunilmut@xxxxxxxxxxxxxxxxxxx> wrote:
> > > >
> > > > From: Sunil Muthuswamy <sunilmut@xxxxxxxxxxxxx>
> > > >
> > > > Add support for Hyper-V vPCI for arm64 by implementing the arch-specific
> > > > interfaces. Introduce an IRQ domain and chip specific to Hyper-V vPCI that
> > > > is based on SPIs. The IRQ domain parents itself to the arch GIC IRQ domain
> > > > for basic vector management.
> > > >
> > > > Signed-off-by: Sunil Muthuswamy <sunilmut@xxxxxxxxxxxxx>
> > > > ---
> > > > In v2, v3 & v4:
> > > > Changes are described in the cover letter.
> > > >
> > > >  arch/arm64/include/asm/hyperv-tlfs.h |   9 ++
> > > >  drivers/pci/Kconfig                  |   2 +-
> > > >  drivers/pci/controller/Kconfig       |   2 +-
> > > >  drivers/pci/controller/pci-hyperv.c  | 207 ++++++++++++++++++++++++++-
> > > >  4 files changed, 217 insertions(+), 3 deletions(-)
> > >
> > > [...]
> > >
> > > > +static int hv_pci_vec_irq_domain_activate(struct irq_domain *domain,
> > > > +					  struct irq_data *irqd, bool reserve)
> > > > +{
> > > > +	static int cpu;
> > > > +
> > > > +	/*
> > > > +	 * Pick a cpu using round-robin as the irq affinity that can be
> > > > +	 * temporarily used for composing MSI from the hypervisor. GIC
> > > > +	 * will eventually set the right affinity for the irq and the
> > > > +	 * 'unmask' will retarget the interrupt to that cpu.
> > > > +	 */
> > > > +	if (cpu >= cpumask_last(cpu_online_mask))
> > > > +		cpu = 0;
> > > > +	cpu = cpumask_next(cpu, cpu_online_mask);
> > > > +	irq_data_update_effective_affinity(irqd, cpumask_of(cpu));
> > >
> > > The mind boggles.
> > >
> > > Let's imagine a single machine. cpu_online_mask only has bit 0 set,
> >
> > single *CPU* machine
> >
> > > and nr_cpumask_bits is 1. This is the first run, and cpu is 1:
> >
> > cpu is *obviously* 0:
> >
> > >
> > > cpu = cpumask_next(cpu, cpu_online_mask);
> > >
> > > cpu is now set to 1. Which is not a valid CPU number, but a valid
> > > return value indicating that there is no next CPU as it is equal to
> > > nr_cpumask_bits. cpumask_of(cpu) will then diligently return crap,
> > > which you carefully store into the irq descriptor. The IRQ subsystem
> > > thanks you.
> > >
> > > The same reasoning applies to any number of CPUs, and you obviously
> > > never checked what any of this does :-(. As to what the behaviour is
> > > when multiple CPUs run this function in parallel, let's not even
> > > bother (locking is overrated).
> > >
> > > Logic and concurrency issues aside, why do you even bother setting
> > > some round-robin affinity if all you need is to set *something* so
> > > that a hypervisor message can be composed? Why not use the first
> > > online CPU? At least it will be correct.
> >
> > Everything else holds.
> >
> > M.
>
> Good call on the round-robin logic never being able to pick CPU 0, and
> that being a problem on a single-CPU system. The 'cpu' variable should
> have been initialized to '-1' for 'cpumask_next' to be able to pick
> CPU 0.
>
> I don't see concurrency as an issue here because this was a best-effort
> attempt to spread the interrupt distribution across CPUs. Even if two
> IRQs ended up on the same CPU, that would still work.
>
> I had also considered just using the first online CPU, since this
> affinity is only temporary. I will go with that, as it also simplifies
> things. Thanks for your feedback.
>
> - Sunil

But, yes, on the concurrency point: I do see a possible race between the
last-CPU check and the 'cpumask_next' call, which could lead to an
invalid CPU being picked. v5 moves this to the first online CPU, which
should fix the issue.
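
For reference, a rough sketch of what the v5 activate callback would look
like with the first-online-CPU approach (untested here; the exact shape
may differ in the actual patch):

static int hv_pci_vec_irq_domain_activate(struct irq_domain *domain,
					  struct irq_data *irqd, bool reserve)
{
	/*
	 * Use the first online CPU as the temporary irq affinity for
	 * composing the MSI from the hypervisor. GIC will eventually
	 * set the right affinity, and 'unmask' will retarget the
	 * interrupt to that CPU.
	 */
	int cpu = cpumask_first(cpu_online_mask);

	irq_data_update_effective_affinity(irqd, cpumask_of(cpu));

	return 0;
}

Since cpumask_first() only reads the first set bit of cpu_online_mask,
there is no shared mutable state left in the callback, and the race goes
away along with the off-by-one.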

- Sunil