Re: [PATCH] sched: optimize __cond_resched_lock()

From: Peter Zijlstra
Date: Tue Dec 21 2021 - 05:09:19 EST


On Tue, Dec 21, 2021 at 09:52:28AM +0100, Peter Zijlstra wrote:
> On Tue, Dec 21, 2021 at 03:23:16PM +0800, xuhaifeng wrote:
> > If the kernel is preemptible (CONFIG_PREEMPTION=y), schedule() may be
> > called twice: once via spin_unlock, once via preempt_schedule_common.
> >
> > We can add one conditional, checking the TIF_NEED_RESCHED flag again,
> > to avoid this.
>
> You can also make it more similar to __cond_resched() instead of making
> it more different.

Bah, sorry, had to wake up first :/

cond_resched_lock still needs to exist for PREEMPT because locks won't
magically release themselves.
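
For reference, the pre-patch __cond_resched_lock() reads roughly as below
(quoted from memory against the tree the diff applies to, so details may
differ); the two places where a CONFIG_PREEMPTION=y kernel can end up in
__schedule() are marked:

int __cond_resched_lock(spinlock_t *lock)
{
	int resched = should_resched(PREEMPT_LOCK_OFFSET);
	int ret = 0;

	lockdep_assert_held(lock);

	if (spin_needbreak(lock) || resched) {
		/* (1) on PREEMPT the preempt_enable() inside spin_unlock()
		 *     already preempts when NEED_RESCHED is set ... */
		spin_unlock(lock);
		if (resched)
			/* (2) ... and then we schedule a second time */
			preempt_schedule_common();
		else
			cpu_relax();
		ret = 1;
		spin_lock(lock);
	}
	return ret;
}

That's the double preemption reported above; it's also why the function has
to stay for PREEMPT: dropping and re-taking the lock is the useful part, only
the explicit reschedule is redundant.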

Still don't much like the patch though, how's this work for you?

That's arguably the right thing to do for PREEMPT_DYNAMIC too.
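
The call sites below then lean on __cond_resched() re-checking NEED_RESCHED
after the unlock; from memory it is roughly:

int __sched __cond_resched(void)
{
	if (should_resched(0)) {
		preempt_schedule_common();
		return 1;
	}
#ifndef CONFIG_PREEMPT_RCU
	rcu_all_qs();
#endif
	return 0;
}

So !PREEMPT and PREEMPT_DYNAMIC builds only schedule if the flag is still set
once the lock has been dropped, while a full-PREEMPT build (without
PREEMPT_DYNAMIC) gets the stub added below and simply falls through to
cpu_relax(), the unlock having done the preemption already.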

---
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 83872f95a1ea..79d3d5e15c4c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8192,6 +8192,11 @@ int __sched __cond_resched(void)
 	return 0;
 }
 EXPORT_SYMBOL(__cond_resched);
+#else
+static inline int __cond_resched(void)
+{
+	return 0;
+}
 #endif
 
 #ifdef CONFIG_PREEMPT_DYNAMIC
@@ -8219,9 +8224,7 @@ int __cond_resched_lock(spinlock_t *lock)
 
 	if (spin_needbreak(lock) || resched) {
 		spin_unlock(lock);
-		if (resched)
-			preempt_schedule_common();
-		else
+		if (!__cond_resched())
 			cpu_relax();
 		ret = 1;
 		spin_lock(lock);
@@ -8239,9 +8242,7 @@ int __cond_resched_rwlock_read(rwlock_t *lock)
 
 	if (rwlock_needbreak(lock) || resched) {
 		read_unlock(lock);
-		if (resched)
-			preempt_schedule_common();
-		else
+		if (!__cond_resched())
 			cpu_relax();
 		ret = 1;
 		read_lock(lock);
@@ -8259,9 +8260,7 @@ int __cond_resched_rwlock_write(rwlock_t *lock)
 
 	if (rwlock_needbreak(lock) || resched) {
 		write_unlock(lock);
-		if (resched)
-			preempt_schedule_common();
-		else
+		if (!__cond_resched())
 			cpu_relax();
 		ret = 1;
 		write_lock(lock);
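
And on a (non-dynamic) full-PREEMPT build the inline stub returns a constant
0, so the call sites effectively reduce to something like (my sketch, not
what the compiler literally emits):

	if (spin_needbreak(lock) || resched) {
		spin_unlock(lock);	/* preempt_enable() does the reschedule */
		cpu_relax();
		ret = 1;
		spin_lock(lock);
	}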