On Tue, 26 Sep 2023 10:58:26 +0800
Yu Kuai <yukuai1@xxxxxxxxxxxxxxx> wrote:
From: Yu Kuai <yukuai3@xxxxxxxxxx>
There are no functional changes, the new helper will still hold
'all_mddevs_lock' after putting mddev, and it will be used to simplify
md_seq_ops.
Signed-off-by: Yu Kuai <yukuai3@xxxxxxxxxx>
---
drivers/md/md.c | 18 +++++++++++++++---
1 file changed, 15 insertions(+), 3 deletions(-)
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 10cb4dfbf4ae..a5ef6f7da8ec 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -616,10 +616,15 @@ static inline struct mddev *mddev_get(struct mddev *mddev)
static void mddev_delayed_delete(struct work_struct *ws);
-void mddev_put(struct mddev *mddev)
+static void __mddev_put(struct mddev *mddev, bool locked)
{
- if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
+ if (locked) {
+ spin_lock(&all_mddevs_lock);
+ if (!atomic_dec_and_test(&mddev->active))
+ return;
It is "locked", and yet we are taking the lock? That seems weird to me. Perhaps
"do_lock" would be a better name? Or did you mean
"lockdep_assert_held(&all_mddevs_lock);"?
Something looks wrong here: we have two paths, and in both cases we end up
taking the lock.
+ } else if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
As above, I'm not sure if it is correct.
return;
+
if (!mddev->raid_disks && list_empty(&mddev->disks) &&
mddev->ctime == 0 && !mddev->hold_active) {
/* Array is not configured at all, and not held active,
@@ -633,7 +638,14 @@ void mddev_put(struct mddev *mddev)
*/
queue_work(md_misc_wq, &mddev->del_work);
}
- spin_unlock(&all_mddevs_lock);
+
+ if (!locked)
+ spin_unlock(&all_mddevs_lock);
Thanks,
Mariusz