Re: [PATCH -next v7 5/5] md: protect md_thread with rcu

From: Yu Kuai
Date: Tue Apr 25 2023 - 23:57:46 EST


Hi,

On 2023/04/26 11:20, Song Liu wrote:
On Tue, Apr 25, 2023 at 4:54 AM Yu Kuai <yukuai1@xxxxxxxxxxxxxxx> wrote:

From: Yu Kuai <yukuai3@xxxxxxxxxx>

Our test reports a use-after-free (UAF) for 'mddev->sync_thread':

T1                              T2
md_start_sync
 md_register_thread
 // mddev->sync_thread is set
                                raid1d
                                 md_check_recovery
                                  md_reap_sync_thread
                                   md_unregister_thread
                                    kfree

 md_wakeup_thread
  wake_up
 // ->sync_thread was already freed

The root cause is that there is a small window between registering the
thread and waking it up, during which the thread can be freed concurrently.

Currently, the global spinlock 'pers_lock' is borrowed to protect
'mddev->thread'. This particular problem could be fixed the same way;
however, there are similar problems elsewhere, and using a global lock
for all of these cases is not a good idea.
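
To make the "borrowed" part concrete, here is a rough, simplified sketch
of that existing scheme (not the exact kernel code): the unregister path
clears the thread pointer under pers_lock, and the wakeup done from
mddev_unlock() happens under the same lock, so a waker sees either a
live thread or NULL:

	/* Unregister side (simplified): unpublish under the global lock. */
	spin_lock(&pers_lock);
	thread = *threadp;
	*threadp = NULL;
	spin_unlock(&pers_lock);
	if (thread) {
		kthread_stop(thread->tsk);
		kfree(thread);
	}

	/* Wakeup side in mddev_unlock() (simplified): same global lock. */
	spin_lock(&pers_lock);
	md_wakeup_thread(mddev->thread);
	md_wakeup_thread(mddev->sync_thread);
	spin_unlock(&pers_lock);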

This patch protects all md_thread accesses with RCU instead.
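
In concrete terms, the intended pattern looks roughly like the sketch
below (a minimal illustration with simplified signatures, not the
literal diff): readers dereference the thread pointer inside an RCU
read-side critical section, and the unregister path unpublishes the
pointer and waits for readers to drain before freeing:

	/* Reader side: wake the thread without risking a use-after-free. */
	void md_wakeup_thread(struct md_thread __rcu **threadp)
	{
		struct md_thread *t;

		rcu_read_lock();
		t = rcu_dereference(*threadp);
		if (t) {
			set_bit(THREAD_WAKEUP, &t->flags);
			wake_up(&t->wqueue);
		}
		rcu_read_unlock();
	}

	/* Writer side: unpublish, wait for in-flight readers, then free. */
	void md_unregister_thread(struct md_thread __rcu **threadp)
	{
		/* 'true' stands in for a proper lockdep condition here. */
		struct md_thread *t = rcu_dereference_protected(*threadp, true);

		if (!t)
			return;
		rcu_assign_pointer(*threadp, NULL);
		synchronize_rcu();
		kthread_stop(t->tsk);
		kfree(t);
	}

This closes the window without a global lock, and the wakeup path stays
lock-free apart from the RCU read-side markers.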

Signed-off-by: Yu Kuai <yukuai3@xxxxxxxxxx>
---
block/blk-cgroup.c | 3 ++
drivers/md/md-bitmap.c | 10 ++++--
drivers/md/md-cluster.c | 17 ++++++----
drivers/md/md-multipath.c | 4 +--
drivers/md/md.c | 69 ++++++++++++++++++---------------------
drivers/md/md.h | 8 ++---
drivers/md/raid1.c | 7 ++--
drivers/md/raid1.h | 2 +-
drivers/md/raid10.c | 20 +++++++-----
drivers/md/raid10.h | 2 +-
drivers/md/raid5-cache.c | 22 ++++++++-----
drivers/md/raid5.c | 15 +++++----
drivers/md/raid5.h | 2 +-
13 files changed, 100 insertions(+), 81 deletions(-)

diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index 1c1ebeb51003..0ecb4cce8af2 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -527,6 +527,9 @@ static void blkg_destroy_all(struct gendisk *disk)
list_for_each_entry_safe(blkg, n, &q->blkg_list, q_node) {
struct blkcg *blkcg = blkg->blkcg;

+ if (hlist_unhashed(&blkg->blkcg_node))
+ continue;
+

This change is not related, right?

Yes, it's not related. Sorry, I accidentally mixed an unrelated fix
patch into this one...


I don't think we can rush this change into the 6.4 merge window. Let's
test it more thoroughly and ship it in the next merge window.

Of course.

Thanks,
Kuai