Re: [PATCH v2 1/2] md: factor out a new helper to put mddev

From: Yu Kuai
Date: Tue Sep 26 2023 - 21:49:12 EST


Hi,

On 2023/09/27 8:15, Song Liu wrote:
On Mon, Sep 25, 2023 at 8:04 PM Yu Kuai <yukuai1@xxxxxxxxxxxxxxx> wrote:

From: Yu Kuai <yukuai3@xxxxxxxxxx>

There are no functional changes. The new helper will still hold
'all_mddevs_lock' after putting mddev, and it will be used to simplify
md_seq_ops.

Signed-off-by: Yu Kuai <yukuai3@xxxxxxxxxx>
---
drivers/md/md.c | 18 +++++++++++++++---
1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 10cb4dfbf4ae..a5ef6f7da8ec 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -616,10 +616,15 @@ static inline struct mddev *mddev_get(struct mddev *mddev)

 static void mddev_delayed_delete(struct work_struct *ws);

-void mddev_put(struct mddev *mddev)
+static void __mddev_put(struct mddev *mddev, bool locked)
 {
-	if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
+	if (locked) {
+		spin_lock(&all_mddevs_lock);
+		if (!atomic_dec_and_test(&mddev->active))
+			return;
+	} else if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
 		return;
+

This condition is quite confusing. No matter whether we call the flag
"locked" or "do_lock", the name is not really accurate.

How about we factor out a helper with the following logic:

	if (!mddev->raid_disks && list_empty(&mddev->disks) &&
	    mddev->ctime == 0 && !mddev->hold_active) {
		/*
		 * Array is not configured at all, and not held active,
		 * so destroy it.
		 */
		set_bit(MD_DELETED, &mddev->flags);

		/*
		 * Call queue_work inside the spinlock so that
		 * flush_workqueue() after mddev_find will succeed in waiting
		 * for the work to be done.
		 */
		queue_work(md_misc_wq, &mddev->del_work);
	}

and then use it at the two call sites?
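
To make it concrete, something along these lines (just a sketch, not
tested; the helper name mddev_destroy_if_unused is made up here, and
the md_seq side in patch 2 could call it directly since it already
holds the lock):

/*
 * Sketch only: the name is illustrative. Caller must hold
 * all_mddevs_lock; this just factors out the "destroy if unused"
 * check.
 */
static void mddev_destroy_if_unused(struct mddev *mddev)
{
	lockdep_assert_held(&all_mddevs_lock);

	if (!mddev->raid_disks && list_empty(&mddev->disks) &&
	    mddev->ctime == 0 && !mddev->hold_active) {
		/*
		 * Array is not configured at all, and not held active,
		 * so destroy it.
		 */
		set_bit(MD_DELETED, &mddev->flags);

		/*
		 * Call queue_work inside the spinlock so that
		 * flush_workqueue() after mddev_find will succeed in
		 * waiting for the work to be done.
		 */
		queue_work(md_misc_wq, &mddev->del_work);
	}
}

void mddev_put(struct mddev *mddev)
{
	if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
		return;

	mddev_destroy_if_unused(mddev);
	spin_unlock(&all_mddevs_lock);
}

Then mddev_put() doesn't need a bool flag at all.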

Does this make sense?

Yes, that sounds great. I'll do this in v3.

Thanks,
Kuai


Thanks,
Song