[tip: locking/core] preempt: Adjust PREEMPT_LOCK_OFFSET for RT

From: tip-bot2 for Thomas Gleixner
Date: Tue Aug 17 2021 - 16:14:11 EST


The following commit has been merged into the locking/core branch of tip:

Commit-ID: 015680aa4c5d784513d0a9728bc52ec7c4a64227
Gitweb: https://git.kernel.org/tip/015680aa4c5d784513d0a9728bc52ec7c4a64227
Author: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
AuthorDate: Sun, 15 Aug 2021 23:29:22 +02:00
Committer: Ingo Molnar <mingo@xxxxxxxxxx>
CommitterDate: Tue, 17 Aug 2021 19:06:04 +02:00

preempt: Adjust PREEMPT_LOCK_OFFSET for RT

On PREEMPT_RT, regular spinlocks and rwlocks are substituted with
rtmutex-based constructs, so spin/rwlock held regions remain
preemptible. Consequently PREEMPT_LOCK_OFFSET has to be 0 on PREEMPT_RT
to make the various cond_resched_*lock() functions work correctly.
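
As a rough illustration of the reasoning, here is a minimal standalone
sketch (not kernel source): the helper can_resched_with_lock_held() is
made up for the example, and PREEMPT_DISABLE_OFFSET is simplified to 1.
cond_resched_*lock() style helpers only consider rescheduling when
preempt_count matches the offset the held lock is expected to add, which
is one preempt_disable() level on !RT but zero on RT, where the
rtmutex-based locks leave preemption enabled:

/* Standalone sketch of the expected-offset logic; not kernel source. */
#include <stdbool.h>

#define PREEMPT_DISABLE_OFFSET	1	/* one preempt_disable() level (simplified) */

#if !defined(CONFIG_PREEMPT_RT)
/* spin_lock()/read_lock()/write_lock() disable preemption */
# define PREEMPT_LOCK_OFFSET	PREEMPT_DISABLE_OFFSET
#else
/* rtmutex-based spin/rwlocks leave preemption enabled */
# define PREEMPT_LOCK_OFFSET	0
#endif

/*
 * Hypothetical helper modelling a should_resched()-style check:
 * rescheduling from a lock-held section is only appropriate when
 * preempt_count equals the offset the held lock is expected to add.
 */
static bool can_resched_with_lock_held(int preempt_count, bool need_resched)
{
	return need_resched && preempt_count == PREEMPT_LOCK_OFFSET;
}

With a regular spinlock held, preempt_count is raised by
PREEMPT_DISABLE_OFFSET, so the !RT definition matches; with an
rtmutex-based lock preempt_count stays untouched, so only an offset of 0
lets the check succeed.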

Signed-off-by: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
Link: https://lore.kernel.org/r/20210815211305.804246275@xxxxxxxxxxxxx
---
include/linux/preempt.h | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/include/linux/preempt.h b/include/linux/preempt.h
index 9881eac..4d244e2 100644
--- a/include/linux/preempt.h
+++ b/include/linux/preempt.h
@@ -121,7 +121,11 @@
 /*
  * The preempt_count offset after spin_lock()
  */
+#if !defined(CONFIG_PREEMPT_RT)
 #define PREEMPT_LOCK_OFFSET	PREEMPT_DISABLE_OFFSET
+#else
+#define PREEMPT_LOCK_OFFSET	0
+#endif
 
 /*
  * The preempt_count offset needed for things like: