Re: [PATCH] arm: smp: Avoid false positive CPU hotplug Lockdep-RCU splat

From: Stefan Wiehler
Date: Sat Mar 09 2024 - 02:45:51 EST


> I agree with the problem but disagree with the patch because it feels like a
> terrible workaround.
>
> Can we just use arch_spin_lock() for the cpu_asid_lock? This might require
> acquiring the raw_lock within the raw_spinlock_t, but there is precedent:
>
> arch/powerpc/kvm/book3s_hv_rm_mmu.c:245:
> arch_spin_lock(&kvm->mmu_lock.rlock.raw_lock);
>
> IMO, lockdep tracking of this lock is not necessary or possible considering the
> hotplug situation.
>
> Or is there a reason you need lockdep working for the cpu_asid_lock?

I was not aware of this possibility of bypassing lockdep tracking, but it seems
to work and indeed looks like less of a workaround:

diff --git a/arch/arm/mm/context.c b/arch/arm/mm/context.c
index 4204ffa2d104..4fc2c559f1b6 100644
--- a/arch/arm/mm/context.c
+++ b/arch/arm/mm/context.c
@@ -254,7 +254,8 @@ void check_and_switch_context(struct mm_struct *mm, struct task_struct *tsk)
 	    && atomic64_xchg(&per_cpu(active_asids, cpu), asid))
 		goto switch_mm_fastpath;

-	raw_spin_lock_irqsave(&cpu_asid_lock, flags);
+	local_irq_save(flags);
+	arch_spin_lock(&cpu_asid_lock.raw_lock);
 	/* Check that our ASID belongs to the current generation. */
 	asid = atomic64_read(&mm->context.id);
 	if ((asid ^ atomic64_read(&asid_generation)) >> ASID_BITS) {
@@ -269,7 +270,8 @@

 	atomic64_set(&per_cpu(active_asids, cpu), asid);
 	cpumask_set_cpu(cpu, mm_cpumask(mm));
-	raw_spin_unlock_irqrestore(&cpu_asid_lock, flags);
+	arch_spin_unlock(&cpu_asid_lock.raw_lock);
+	local_irq_restore(flags);

 switch_mm_fastpath:
 	cpu_switch_mm(mm->pgd, mm);
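
For context, the general pattern in the hunks above (a minimal sketch with
made-up identifiers, not part of the proposed change): arch_spin_lock()
operates directly on the arch_spinlock_t embedded in the raw_spinlock_t, so
the acquisition is invisible to lockdep, and since the *_irqsave() wrapper is
bypassed as well, interrupts have to be disabled explicitly to keep the
original semantics:

#include <linux/irqflags.h>
#include <linux/spinlock.h>

static DEFINE_RAW_SPINLOCK(example_lock);

static void example_critical_section(void)
{
	unsigned long flags;

	/* What raw_spin_lock_irqsave() would otherwise do for us */
	local_irq_save(flags);
	/* Take the underlying arch_spinlock_t, no lockdep tracking */
	arch_spin_lock(&example_lock.raw_lock);

	/* ... critical section ... */

	arch_spin_unlock(&example_lock.raw_lock);
	local_irq_restore(flags);
}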

@Russell, what do you think?