[PATCH v6 10/20] locking/mutex: Switch to mutex handoffs for CONFIG_PROXY_EXEC

From: John Stultz
Date: Mon Nov 06 2023 - 14:36:27 EST


From: Peter Zijlstra <peterz@xxxxxxxxxxxxx>

Since with PROXY_EXEC we will want to hand off locks to the
tasks we are running on behalf of, switch to using mutex
handoffs.

Cc: Joel Fernandes <joelaf@xxxxxxxxxx>
Cc: Qais Yousef <qyousef@xxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Juri Lelli <juri.lelli@xxxxxxxxxx>
Cc: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
Cc: Dietmar Eggemann <dietmar.eggemann@xxxxxxx>
Cc: Valentin Schneider <vschneid@xxxxxxxxxx>
Cc: Steven Rostedt <rostedt@xxxxxxxxxxx>
Cc: Ben Segall <bsegall@xxxxxxxxxx>
Cc: Zimuzo Ezeozue <zezeozue@xxxxxxxxxx>
Cc: Youssef Esmat <youssefesmat@xxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxx>
Cc: Daniel Bristot de Oliveira <bristot@xxxxxxxxxx>
Cc: Will Deacon <will@xxxxxxxxxx>
Cc: Waiman Long <longman@xxxxxxxxxx>
Cc: Boqun Feng <boqun.feng@xxxxxxxxx>
Cc: "Paul E . McKenney" <paulmck@xxxxxxxxxx>
Cc: kernel-team@xxxxxxxxxxx
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
[rebased, added comments and changelog]
Signed-off-by: Juri Lelli <juri.lelli@xxxxxxxxxx>
[Fixed rebase conflicts]
[squashed sched: Ensure blocked_on is always guarded by blocked_lock]
Signed-off-by: Valentin Schneider <valentin.schneider@xxxxxxx>
[fix rebase conflicts, various fixes & tweaks commented inline]
[squashed sched: Use rq->curr vs rq->proxy checks]
Signed-off-by: Connor O'Brien <connoro@xxxxxxxxxx>
[jstultz: Split out only the very basic initial framework
for proxy logic from a larger patch.]
Signed-off-by: John Stultz <jstultz@xxxxxxxxxx>
---
v5:
* Split out from core proxy patch
v6:
* Rework to use sched_proxy_exec() instead of #ifdef CONFIG_PROXY_EXEC
---
kernel/Kconfig.locks | 2 +-
kernel/locking/mutex.c | 39 ++++++++++++++++++++++-----------------
2 files changed, 23 insertions(+), 18 deletions(-)

diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index 4198f0273ecd..791c98f1d329 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -226,7 +226,7 @@ config ARCH_SUPPORTS_ATOMIC_RMW

config MUTEX_SPIN_ON_OWNER
def_bool y
- depends on SMP && ARCH_SUPPORTS_ATOMIC_RMW
+ depends on SMP && ARCH_SUPPORTS_ATOMIC_RMW && !PROXY_EXEC

config RWSEM_SPIN_ON_OWNER
def_bool y
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index f37b7afe8aa5..5394a3c4b5d9 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -914,26 +914,31 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne

mutex_release(&lock->dep_map, ip);

- /*
- * Release the lock before (potentially) taking the spinlock such that
- * other contenders can get on with things ASAP.
- *
- * Except when HANDOFF, in that case we must not clear the owner field,
- * but instead set it to the top waiter.
- */
- owner = atomic_long_read(&lock->owner);
- for (;;) {
- MUTEX_WARN_ON(__owner_task(owner) != current);
- MUTEX_WARN_ON(owner & MUTEX_FLAG_PICKUP);
-
- if (owner & MUTEX_FLAG_HANDOFF)
- break;
+ if (sched_proxy_exec()) {
+ /* Always force HANDOFF for Proxy Exec for now. Revisit. */
+ owner = MUTEX_FLAG_HANDOFF;
+ } else {
+ /*
+ * Release the lock before (potentially) taking the spinlock
+ * such that other contenders can get on with things ASAP.
+ *
+ * Except when HANDOFF, in that case we must not clear the
+ * owner field, but instead set it to the top waiter.
+ */
+ owner = atomic_long_read(&lock->owner);
+ for (;;) {
+ MUTEX_WARN_ON(__owner_task(owner) != current);
+ MUTEX_WARN_ON(owner & MUTEX_FLAG_PICKUP);

- if (atomic_long_try_cmpxchg_release(&lock->owner, &owner, __owner_flags(owner))) {
- if (owner & MUTEX_FLAG_WAITERS)
+ if (owner & MUTEX_FLAG_HANDOFF)
break;

- return;
+ if (atomic_long_try_cmpxchg_release(&lock->owner, &owner,
+ __owner_flags(owner))) {
+ if (owner & MUTEX_FLAG_WAITERS)
+ break;
+ return;
+ }
}
}

--
2.42.0.869.gea05f2083d-goog