[PATCH AUTOSEL 5.16 18/52] sched: Avoid double preemption in __cond_resched_*lock*()

From: Sasha Levin
Date: Thu Feb 03 2022 - 15:32:40 EST


From: Peter Zijlstra <peterz@xxxxxxxxxxxxx>

[ Upstream commit 7e406d1ff39b8ee574036418a5043c86723170cf ]

For PREEMPT/PREEMPT_DYNAMIC the *_unlock() will already trigger a
preemption, so there is no point in then calling
preempt_schedule_common() *again*.

Use _cond_resched() instead, since this is a NOP for the preemptible
configs while it provides a preemption point for the others.
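
To make the reasoning concrete, here is a small standalone userspace
model (not kernel code) of the before/after control flow. The
preemptible_kernel flag and the *_model() helpers are hypothetical
stand-ins for the corresponding kernel config option and functions:
with a preemptible kernel the old flow reschedules twice (once inside
the unlock, once explicitly), while the new flow just falls back to
cpu_relax().

	/*
	 * Userspace sketch only; names are stand-ins, not kernel APIs.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	static const bool preemptible_kernel = true;	/* flip to model !PREEMPT */

	static void spin_unlock_model(void)
	{
		/* On preemptible kernels the real unlock already preempts
		 * here when a reschedule is pending. */
		puts(preemptible_kernel ? "unlock -> preemption happens here"
					: "unlock");
	}

	static void preempt_schedule_common_model(void)
	{
		puts("explicit reschedule");
	}

	/*
	 * Model of _cond_resched(): a NOP returning 0 on preemptible
	 * configs, a real preemption point (returning 1 if it
	 * rescheduled) otherwise.
	 */
	static int cond_resched_model(bool resched)
	{
		if (preemptible_kernel)
			return 0;
		if (resched) {
			preempt_schedule_common_model();
			return 1;
		}
		return 0;
	}

	static void old_flow(bool resched)
	{
		spin_unlock_model();
		if (resched)
			preempt_schedule_common_model();	/* second preemption */
		else
			puts("cpu_relax");
	}

	static void new_flow(bool resched)
	{
		spin_unlock_model();
		if (!cond_resched_model(resched))
			puts("cpu_relax");	/* no second preemption */
	}

	int main(void)
	{
		puts("old flow:");
		old_flow(true);
		puts("new flow:");
		new_flow(true);
		return 0;
	}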

Reported-by: xuhaifeng <xuhaifeng@xxxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Link: https://lkml.kernel.org/r/YcGnvDEYBwOiV0cR@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>
---
kernel/sched/core.c | 12 +++---------
1 file changed, 3 insertions(+), 9 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 77563109c0ea0..d24823b3c3f9f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8176,9 +8176,7 @@ int __cond_resched_lock(spinlock_t *lock)
 
 	if (spin_needbreak(lock) || resched) {
 		spin_unlock(lock);
-		if (resched)
-			preempt_schedule_common();
-		else
+		if (!_cond_resched())
 			cpu_relax();
 		ret = 1;
 		spin_lock(lock);
@@ -8196,9 +8194,7 @@ int __cond_resched_rwlock_read(rwlock_t *lock)
 
 	if (rwlock_needbreak(lock) || resched) {
 		read_unlock(lock);
-		if (resched)
-			preempt_schedule_common();
-		else
+		if (!_cond_resched())
 			cpu_relax();
 		ret = 1;
 		read_lock(lock);
@@ -8216,9 +8212,7 @@ int __cond_resched_rwlock_write(rwlock_t *lock)
 
 	if (rwlock_needbreak(lock) || resched) {
 		write_unlock(lock);
-		if (resched)
-			preempt_schedule_common();
-		else
+		if (!_cond_resched())
 			cpu_relax();
 		ret = 1;
 		write_lock(lock);
--
2.34.1