[RFC PATCH] sched/core: Preempt current task in favour of bound kthread

From: Srikar Dronamraju
Date: Fri Dec 06 2019 - 22:20:54 EST


A running task can wake up a per-CPU bound kthread on the same CPU.
If the current task does not yield the CPU before the next load
balance operation, the scheduler detects a load imbalance and tries
to balance the load. However, this load balance fails: the waiting
kthread is pinned to the CPU, and the currently running task cannot
be moved by the regular load balancer. Eventually the active load
balancer kicks in and moves the running task to a different CPU/core.
Moving the task to a different CPU/core can cost it cache affinity,
leading to poor performance.

This is more likely to happen when the current running task is CPU
intensive and sched_wake_up_granularity is set to a larger value.
With a relatively small sched_wake_up_granularity, the bound kthread
was observed to complete before the load balancer chose to move the
cache-hot task to a different CPU.

To handle this situation, have the current running task yield the CPU
to a per-CPU bound kthread, provided the kthread is not more CPU
intensive than the current task.
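The "not CPU intensive" check in kthread_wakeup_preempt() below compares
the tasks' voluntary (nvcsw) and involuntary (nivcsw) context-switch
counts by cross multiplication, avoiding division and any zero check. A
minimal userspace sketch of that comparison (the struct and helper names
here are illustrative, not part of the patch):

	#include <stdio.h>

	/* Illustrative stand-in for the counters kept in task_struct. */
	struct task_counters {
		unsigned long nvcsw;	/* voluntary context switches */
		unsigned long nivcsw;	/* involuntary context switches */
	};

	/*
	 * Non-zero when the waking kthread @p is no more CPU intensive
	 * than the current task @curr. A task that mostly switches out
	 * voluntarily (high nvcsw relative to nivcsw) runs in short
	 * bursts, so letting it preempt is cheap. The cross
	 * multiplication compares the ratios nvcsw/nivcsw without
	 * dividing:
	 *	p->nvcsw / p->nivcsw >= curr->nvcsw / curr->nivcsw
	 */
	static int kthread_may_preempt(const struct task_counters *p,
				       const struct task_counters *curr)
	{
		return p->nvcsw * curr->nivcsw >= p->nivcsw * curr->nvcsw;
	}

	int main(void)
	{
		/* Bursty kthread: 90 voluntary vs 10 involuntary switches. */
		struct task_counters kthread = { .nvcsw = 90, .nivcsw = 10 };
		/* CPU hog: preempted far more often than it sleeps. */
		struct task_counters hog = { .nvcsw = 5, .nivcsw = 95 };

		/* 90*95 >= 10*5, so the bursty kthread may preempt: prints 1 */
		printf("kthread preempts hog: %d\n",
		       kthread_may_preempt(&kthread, &hog));
		/* 5*10 < 95*90, so the hog may not: prints 0 */
		printf("hog preempts kthread: %d\n",
		       kthread_may_preempt(&hog, &kthread));
		return 0;
	}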

/pboffline/hwcct_prg_old/lib/fsperf -t overwrite --noclean -f 5g -b 4k /pboffline

(With sched_wake_up_granularity set to 15ms)

Performance counter stats for 'system wide' (5 runs):
event                                  v5.4                     v5.4 + patch
probe:active_load_balance_cpu_stop     1,740 ( +- 4.06% )       4 ( +- 15.41% )
sched:sched_waking                     431,952 ( +- 0.63% )     905,001 ( +- 0.25% )
sched:sched_wakeup                     431,950 ( +- 0.63% )     905,000 ( +- 0.25% )
sched:sched_wakeup_new                 427 ( +- 14.50% )        544 ( +- 3.11% )
sched:sched_switch                     773,491 ( +- 0.72% )     1,467,539 ( +- 0.30% )
sched:sched_migrate_task               19,744 ( +- 0.69% )      2,488 ( +- 5.24% )
sched:sched_process_free               417 ( +- 15.26% )        545 ( +- 3.47% )
sched:sched_process_exit               433 ( +- 14.71% )        560 ( +- 3.37% )
sched:sched_wait_task                  3 ( +- 23.57% )          1 ( +- 61.24% )
sched:sched_process_wait               132 ( +- 80.37% )        848 ( +- 3.63% )
sched:sched_process_fork               427 ( +- 14.50% )        543 ( +- 3.08% )
sched:sched_process_exec               36 ( +- 92.46% )         211 ( +- 7.50% )
sched:sched_wake_idle_without_ipi      178,349 ( +- 0.87% )     351,912 ( +- 0.31% )

elapsed time in seconds 288.09 +- 2.30 ( +- 0.80% ) 72.631 +- 0.109 ( +- 0.15% )

Throughput results

v5.4
Trigger time:................... 0.842679 s (Throughput: 6075.86 MB/s)
Asynchronous submit time:....... 1.0184 s (Throughput: 5027.49 MB/s)
Synchronous submit time:........ 0 s (Throughput: 0 MB/s)
I/O time:....................... 263.17 s (Throughput: 19.455 MB/s)
Ratio trigger time to I/O time:.0.00320202

v5.4 + patch
Trigger time:................... 0.858728 s (Throughput: 5962.3 MB/s)
Asynchronous submit time:....... 0.758399 s (Throughput: 6751.06 MB/s)
Synchronous submit time:........ 0 s (Throughput: 0 MB/s)
I/O time:....................... 43.411 s (Throughput: 117.942 MB/s)
Ratio trigger time to I/O time:.0.0197813

(With sched_wake_up_granularity set to 4ms)

Performance counter stats for 'system wide' (5 runs):
event                                  v5.4                           v5.4 + patch
probe:active_load_balance_cpu_stop     4 ( +- 29.92% )                6 ( +- 21.88% )
sched:sched_waking                     896,177 ( +- 0.25% )           900,352 ( +- 0.36% )
sched:sched_wakeup                     896,174 ( +- 0.25% )           900,352 ( +- 0.36% )
sched:sched_wakeup_new                 255 ( +- 40.79% )              568 ( +- 4.22% )
sched:sched_switch                     1,453,937 ( +- 0.27% )         1,459,653 ( +- 0.46% )
sched:sched_migrate_task               2,318 ( +- 6.55% )             2,898 ( +- 13.14% )
sched:sched_process_free               239 ( +- 43.14% )              553 ( +- 4.46% )
sched:sched_process_exit               255 ( +- 40.54% )              568 ( +- 4.33% )
sched:sched_wait_task                  3 ( +- 38.13% )                2 ( +- 32.39% )
sched:sched_process_wait               257 ( +- 68.90% )              887 ( +- 4.59% )
sched:sched_process_fork               255 ( +- 40.87% )              567 ( +- 4.21% )
sched:sched_process_exec               116 ( +- 71.52% )              214 ( +- 4.51% )
sched:sched_stat_runtime               82,757,021,750 ( +- 2.38% )    82,092,839,452 ( +- 0.31% )
sched:sched_wake_idle_without_ipi      347,790 ( +- 0.69% )           350,369 ( +- 0.27% )

elapsed time in seconds 72.6114 +- 0.0516 ( +- 0.07% ) 72.6425 +- 0.0658 ( +- 0.09% )

Signed-off-by: Srikar Dronamraju <srikar@xxxxxxxxxxxxxxxxxx>
---
kernel/sched/core.c | 7 ++++++-
kernel/sched/fair.c | 23 ++++++++++++++++++++++-
kernel/sched/sched.h | 3 ++-
3 files changed, 30 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 44123b4d14e8..efd740aafa17 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2664,7 +2664,12 @@ try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
*/
int wake_up_process(struct task_struct *p)
{
- return try_to_wake_up(p, TASK_NORMAL, 0);
+ int wake_flags = 0;
+
+ if (is_per_cpu_kthread(p))
+ wake_flags = WF_KTHREAD;
+
+ return try_to_wake_up(p, TASK_NORMAL, wake_flags);
}
EXPORT_SYMBOL(wake_up_process);

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 69a81a5709ff..36486f71e59f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6660,6 +6660,27 @@ static void set_skip_buddy(struct sched_entity *se)
cfs_rq_of(se)->skip = se;
}

+static int kthread_wakeup_preempt(struct rq *rq, struct task_struct *p, int wake_flags)
+{
+ struct task_struct *curr = rq->curr;
+ struct cfs_rq *cfs_rq = task_cfs_rq(curr);
+
+ if (!(wake_flags & WF_KTHREAD))
+ return 0;
+
+ if (p->nr_cpus_allowed != 1 || curr->nr_cpus_allowed == 1)
+ return 0;
+
+ if (cfs_rq->nr_running > 2)
+ return 0;
+
+ /*
+ * Don't preempt if the waking kthread is more CPU intensive than
+ * the current thread.
+ */
+ return p->nvcsw * curr->nivcsw >= p->nivcsw * curr->nvcsw;
+}
+
/*
* Preempt the current task with a newly woken task if needed:
*/
@@ -6716,7 +6737,7 @@ static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int wake_
find_matching_se(&se, &pse);
update_curr(cfs_rq_of(se));
BUG_ON(!pse);
- if (wakeup_preempt_entity(se, pse) == 1) {
+ if (wakeup_preempt_entity(se, pse) == 1 || kthread_wakeup_preempt(rq, p, wake_flags)) {
/*
* Bias pick_next to pick the sched entity that is
* triggering this preemption.
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index c8870c5bd7df..23d4284ad1e3 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1643,7 +1643,8 @@ static inline int task_on_rq_migrating(struct task_struct *p)
*/
#define WF_SYNC 0x01 /* Waker goes to sleep after wakeup */
#define WF_FORK 0x02 /* Child wakeup after fork */
-#define WF_MIGRATED 0x4 /* Internal use, task got migrated */
+#define WF_MIGRATED 0x04 /* Internal use, task got migrated */
+#define WF_KTHREAD 0x08 /* Per-CPU kthread */

/*
* To aid in avoiding the subversion of "niceness" due to uneven distribution
--
2.18.1