Re: [PATCH v4] perf/arm-dmc620: Fix dmc620_pmu_irqs_lock/cpu_hotplug_lock circular lock dependency

From: Waiman Long
Date: Thu Aug 10 2023 - 11:28:09 EST



On 8/9/23 07:58, Will Deacon wrote:
> On Tue, Aug 08, 2023 at 03:10:01PM -0400, Waiman Long wrote:
>> On 8/8/23 08:29, Robin Murphy wrote:
>>> On 2023-08-07 16:44, Waiman Long wrote:
>>>> The following circular locking dependency was reported when running
>>>> a CPU online/offline test on an arm64 system.
>>>>
>>>> [   84.195923] Chain exists of:
>>>>                  dmc620_pmu_irqs_lock --> cpu_hotplug_lock --> cpuhp_state-down
>>>>
>>>> [   84.207305]  Possible unsafe locking scenario:
>>>>
>>>> [   84.213212]        CPU0                    CPU1
>>>> [   84.217729]        ----                    ----
>>>> [   84.222247]   lock(cpuhp_state-down);
>>>> [   84.225899]                                lock(cpu_hotplug_lock);
>>>> [   84.232068]                                lock(cpuhp_state-down);
>>>> [   84.238237]   lock(dmc620_pmu_irqs_lock);
>>>> [   84.242236]
>>>>                 *** DEADLOCK ***
>>>>
>>>> The problematic locking order seems to be
>>>>
>>>>     lock(dmc620_pmu_irqs_lock) --> lock(cpu_hotplug_lock)
>>>>
>>>> This locking order happens when dmc620_pmu_get_irq() calls
>>>> cpuhp_state_add_instance_nocalls(). Since dmc620_pmu_irqs_lock is only
>>>> used to protect the dmc620_pmu_irqs list, we don't actually need to
>>>> hold it while adding a new instance to the CPU hotplug subsystem.
>>>>
>>>> Fix this possible deadlock scenario by adding a new
>>>> dmc620_pmu_get_irq_lock that protects the call to __dmc620_pmu_get_irq(),
>>>> and by taking dmc620_pmu_irqs_lock inside __dmc620_pmu_get_irq() only
>>>> while dmc620_pmu_irqs is being searched or modified. As a result,
>>>> cpuhp_state_add_instance_nocalls() won't be called with
>>>> dmc620_pmu_irqs_lock held, and cpu_hotplug_lock won't be acquired after
>>>> dmc620_pmu_irqs_lock.
>>>>
>>>> Suggested-by: Robin Murphy <robin.murphy@xxxxxxx>
>>>> Signed-off-by: Waiman Long <longman@xxxxxxxxxx>
>>>> ---
>>>>  drivers/perf/arm_dmc620_pmu.c | 18 ++++++++++++++----
>>>>  1 file changed, 14 insertions(+), 4 deletions(-)
>>>>
>>>> diff --git a/drivers/perf/arm_dmc620_pmu.c b/drivers/perf/arm_dmc620_pmu.c
>>>> index 9d0f01c4455a..895971915f2d 100644
>>>> --- a/drivers/perf/arm_dmc620_pmu.c
>>>> +++ b/drivers/perf/arm_dmc620_pmu.c
>>>> @@ -68,6 +68,7 @@
>>>>
>>>>  static LIST_HEAD(dmc620_pmu_irqs);
>>>>  static DEFINE_MUTEX(dmc620_pmu_irqs_lock);
>>>> +static DEFINE_MUTEX(dmc620_pmu_get_irq_lock);
>>>>
>>>>  struct dmc620_pmu_irq {
>>>>      struct hlist_node node;
>>>> @@ -421,11 +422,18 @@ static irqreturn_t dmc620_pmu_handle_irq(int irq_num, void *data)
>>>>  static struct dmc620_pmu_irq *__dmc620_pmu_get_irq(int irq_num)
>>>>  {
>>>>      struct dmc620_pmu_irq *irq;
>>>> +    bool found = false;
>>>>      int ret;
>>>>
>>>> +    mutex_lock(&dmc620_pmu_irqs_lock);
>>> Do we strictly need this? I'd hope that the outer release/acquire of
>>> dmc620_pmu_get_irq_lock already means we can't observe an invalid value
>>> of irq->irq_num, and the refcount op should be atomic in itself, no?
>>> Fair enough if there's some other subtlety I'm missing - I do trust that
>>> you're more experienced in locking and barrier semantics than I am! -
>>> and if it comes to it I'd agree that simple extra locking is preferable
>>> to getting into explicit memory barriers here.
>> I guess we can use rcu_read_lock/rcu_read_unlock and
>> list_for_each_entry_rcu() to avoid taking dmc620_pmu_irqs_lock here.
> I thought we decided that we couldn't use RCU in:
>
> https://lore.kernel.org/r/2f56057b-08ef-c3a6-8300-33f36d2c3916@xxxxxxx
>
> ?

Right. I am not planning to use RCU anyway.
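
Just for reference, the lookup I was thinking of would have been roughly
the sketch below (dmc620_pmu_find_irq_rcu() is only an illustrative name;
the list and field names follow the existing driver). The list writers
would also have to switch to list_add_rcu()/list_del_rcu() and defer
freeing the entries until after a grace period, so it is more churn than
it is worth here:

/*
 * Sketch only -- not part of the patch. An RCU-protected lookup that
 * avoids taking dmc620_pmu_irqs_lock on the read side; it is only safe
 * if the list writers use list_add_rcu()/list_del_rcu() and free the
 * entries after a grace period.
 */
static struct dmc620_pmu_irq *dmc620_pmu_find_irq_rcu(int irq_num)
{
        struct dmc620_pmu_irq *irq;

        rcu_read_lock();
        list_for_each_entry_rcu(irq, &dmc620_pmu_irqs, irqs_node) {
                if (irq->irq_num == irq_num &&
                    refcount_inc_not_zero(&irq->refcount)) {
                        rcu_read_unlock();
                        return irq;     /* refcount pins the entry */
                }
        }
        rcu_read_unlock();

        return NULL;
}
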
>>> One other nit either way, could we clarify the names to be something
>>> like irqs_list_lock and irqs_users_lock? The split locking scheme
>>> doesn't exactly lend itself to being super-obvious, especially if we do
>>> end up nesting both locks, so I think naming them after what they
>>> semantically protect seems the most readable option. Otherwise, this
>>> does pretty much look like what I originally had in mind.
>> I think it is a good idea to rename dmc620_pmu_irqs_lock to
>> dmc620_pmu_irqs_list_lock. For the other lock, its purpose is to make
>> sure that only one user at a time can get into __dmc620_pmu_get_irq(),
>> so maybe dmc620_irqs_get_lock. I can add a comment to clarify the
>> nesting relationship.
> Please do that and I'll pick the patch up for 6.6.

Will do.
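
Roughly, I am thinking of something along these lines -- a sketch only,
using the names discussed above (the final names may still differ), with
the allocation/request_irq details of __dmc620_pmu_get_irq() elided and
the struct fields and cpuhp_state_num as in the existing driver:

/*
 * Lock ordering:
 *
 *   dmc620_irqs_get_lock (outer)
 *     Serializes callers of __dmc620_pmu_get_irq() so that lookup and
 *     creation of an irq instance cannot race with each other.
 *
 *   dmc620_pmu_irqs_list_lock (inner)
 *     Protects only the dmc620_pmu_irqs list. It is never held across
 *     cpuhp_state_add_instance_nocalls(), so cpu_hotplug_lock is never
 *     acquired while it is held.
 */
static LIST_HEAD(dmc620_pmu_irqs);
static DEFINE_MUTEX(dmc620_pmu_irqs_list_lock);
static DEFINE_MUTEX(dmc620_irqs_get_lock);

static struct dmc620_pmu_irq *__dmc620_pmu_get_irq(int irq_num)
{
        struct dmc620_pmu_irq *irq;
        bool found = false;
        int ret;

        /* Inner lock only while the irqs list is searched... */
        mutex_lock(&dmc620_pmu_irqs_list_lock);
        list_for_each_entry(irq, &dmc620_pmu_irqs, irqs_node)
                if (irq->irq_num == irq_num &&
                    refcount_inc_not_zero(&irq->refcount)) {
                        found = true;
                        break;
                }
        mutex_unlock(&dmc620_pmu_irqs_list_lock);

        if (found)
                return irq;

        irq = kzalloc(sizeof(*irq), GFP_KERNEL);
        if (!irq)
                return ERR_PTR(-ENOMEM);
        refcount_set(&irq->refcount, 1);
        irq->irq_num = irq_num;
        /* ...request_irq()/affinity setup elided... */

        /*
         * cpu_hotplug_lock is acquired in here with only the outer
         * dmc620_irqs_get_lock held, never the list lock.
         */
        ret = cpuhp_state_add_instance_nocalls(cpuhp_state_num, &irq->node);
        if (ret) {
                kfree(irq);
                return ERR_PTR(ret);
        }

        /* ...and the inner lock is re-taken only to publish the entry. */
        mutex_lock(&dmc620_pmu_irqs_list_lock);
        list_add(&irq->irqs_node, &dmc620_pmu_irqs);
        mutex_unlock(&dmc620_pmu_irqs_list_lock);

        return irq;
}

static int dmc620_pmu_get_irq(struct dmc620_pmu *dmc620_pmu, int irq_num)
{
        struct dmc620_pmu_irq *irq;

        /* Outer lock: the only place where the two locks nest. */
        mutex_lock(&dmc620_irqs_get_lock);
        irq = __dmc620_pmu_get_irq(irq_num);
        mutex_unlock(&dmc620_irqs_get_lock);

        if (IS_ERR(irq))
                return PTR_ERR(irq);

        dmc620_pmu->irq = irq;
        /* ...linking the pmu onto the irq instance elided... */
        return 0;
}

So the ordering is always dmc620_irqs_get_lock -> dmc620_pmu_irqs_list_lock,
and cpu_hotplug_lock is never taken with the list lock held.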

Cheers,
Longman