[tip: sched/core] sched/fair: Ratelimit update to tg->load_avg

From: tip-bot2 for Aaron Lu
Date: Sun Sep 17 2023 - 06:13:19 EST


The following commit has been merged into the sched/core branch of tip:

Commit-ID: afc1996859a2c26fd1190ec2b9ccf26e700e8ed7
Gitweb: https://git.kernel.org/tip/afc1996859a2c26fd1190ec2b9ccf26e700e8ed7
Author: Aaron Lu <aaron.lu@xxxxxxxxx>
AuthorDate: Tue, 12 Sep 2023 14:58:08 +08:00
Committer: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
CommitterDate: Sat, 16 Sep 2023 11:16:02 +02:00

sched/fair: Ratelimit update to tg->load_avg

When using sysbench to benchmark Postgres in a single docker instance
with sysbench's nr_threads set to nr_cpu, it is observed that
update_cfs_group() and update_load_avg() at times show noticeable
overhead on a 2-socket/112-core/224-CPU Intel Sapphire Rapids (SPR):

13.75% 13.74% [kernel.vmlinux] [k] update_cfs_group
10.63% 10.04% [kernel.vmlinux] [k] update_load_avg

Annotation shows the cycles are mostly spent accessing tg->load_avg,
with update_load_avg() being the write side and update_cfs_group() the
read side. tg->load_avg is per task group, so when different tasks of
the same task group run on different CPUs and frequently access
tg->load_avg, it can become heavily contended.

E.g. when running postgres_sysbench on a 2-socket/112-core/224-CPU Intel
Sapphire Rapids, a 5s window sees 14 million wakeups and 11 million
migrations; with each migration, the task's load transfers from the
source cfs_rq to the target cfs_rq, and each such change involves an
update to tg->load_avg. Since the workload can trigger this many wakeups
and migrations, the accesses (both read and write) to tg->load_avg are
effectively unbounded. As a result, the two functions mentioned above
show noticeable overhead. With netperf/nr_client=nr_cpu/UDP_RR, the
problem is worse: during a 5s window, there are 21 million wakeups and
14 million migrations; update_cfs_group() costs ~25% and
update_load_avg() ~16%.
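
The contention pattern itself is easy to reproduce outside the kernel.
Below is a minimal userspace sketch (not kernel code; thread and counter
names are illustrative) where several threads do atomic read-modify-writes
on one shared counter, standing in for tasks of one task group updating
tg->load_avg from different CPUs; the shared cache line bounces between
CPUs on every update, which is the cost that shows up under
update_load_avg()/update_cfs_group() in the profile above:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NR_THREADS	8
#define NR_UPDATES	(1 << 22)

/* Stand-in for tg->load_avg: a single atomic shared by all threads. */
static _Atomic long shared_load_avg;

static void *hammer(void *arg)
{
	(void)arg;
	/* Every iteration is an atomic RMW on the same cache line. */
	for (long i = 0; i < NR_UPDATES; i++)
		atomic_fetch_add_explicit(&shared_load_avg, 1,
					  memory_order_relaxed);
	return NULL;
}

int main(void)
{
	pthread_t tid[NR_THREADS];

	for (int i = 0; i < NR_THREADS; i++)
		pthread_create(&tid[i], NULL, hammer, NULL);
	for (int i = 0; i < NR_THREADS; i++)
		pthread_join(tid[i], NULL);
	printf("final: %ld\n", (long)atomic_load(&shared_load_avg));
	return 0;
}

Build with "gcc -O2 -pthread"; timing it against a variant with one
counter per thread makes the cache-line bouncing visible.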

Reduce the overhead by limiting updates to tg->load_avg to at most once
per ms. The update frequency is a tradeoff between tracking accuracy and
overhead: 1ms is chosen because the PELT window is roughly 1ms (1024us)
and it delivered good results for the tests that I've done. After this
change, the cost of accessing tg->load_avg is greatly reduced and
performance improves. Detailed test results below.
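
In isolation, the fix is the familiar timestamp-based ratelimit pattern.
A minimal userspace sketch of the same logic (struct, field and function
names here are illustrative, not the kernel's; the real code is in the
patch below):

#include <stdatomic.h>
#include <stdint.h>
#include <stdlib.h>

#define NSEC_PER_MSEC	1000000ULL

struct local_state {
	long		local_sum;	/* private running value	*/
	long		last_pushed;	/* value last made visible	*/
	uint64_t	last_update;	/* time of last push, in ns	*/
};

/* Propagate local changes to the shared counter at most once per ms. */
static void maybe_push(struct local_state *ls, _Atomic long *shared,
		       uint64_t now_ns)
{
	long delta;

	if (now_ns - ls->last_update < NSEC_PER_MSEC)
		return;

	delta = ls->local_sum - ls->last_pushed;
	/* Mirror the patch's 1/64 threshold: skip negligible changes. */
	if (labs(delta) > labs(ls->last_pushed) / 64) {
		atomic_fetch_add(shared, delta);
		ls->last_pushed = ls->local_sum;
		ls->last_update = now_ns;
	}
}

Note that, as in the patch, the timestamp only advances when a push is
actually performed, so a sub-threshold delta does not start a new 1ms
window.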

==============================
postgres_sysbench on SPR:
25%
base: 42382±19.8%
patch: 50174±9.5% (noise)

50%
base: 67626±1.3%
patch: 67365±3.1% (noise)

75%
base: 100216±1.2%
patch: 112470±0.1% +12.2%

100%
base: 93671±0.4%
patch: 113563±0.2% +21.2%

==============================
hackbench on ICL:
group=1
base: 114912±5.2%
patch: 117857±2.5% (noise)

group=4
base: 359902±1.6%
patch: 361685±2.7% (noise)

group=8
base: 461070±0.8%
patch: 491713±0.3% +6.6%

group=16
base: 309032±5.0%
patch: 378337±1.3% +22.4%

=============================
hackbench on SPR:
group=1
base: 100768±2.9%
patch: 103134±2.9% (noise)

group=4
base: 413830±12.5%
patch: 378660±16.6% (noise)

group=8
base: 436124±0.6%
patch: 490787±3.2% +12.5%

group=16
base: 457730±3.2%
patch: 680452±1.3% +48.8%

============================
netperf/udp_rr on ICL
25%
base: 114413±0.1%
patch: 115111±0.0% +0.6%

50%
base: 86803±0.5%
patch: 86611±0.0% (noise)

75%
base: 35959±5.3%
patch: 49801±0.6% +38.5%

100%
base: 61951±6.4%
patch: 70224±0.8% +13.4%

===========================
netperf/udp_rr on SPR
25%
base: 104954±1.3%
patch: 107312±2.8% (noise)

50%
base: 55394±4.6%
patch: 54940±7.4% (noise)

75%
base: 13779±3.1%
patch: 36105±1.1% +162%

100%
base: 9703±3.7%
patch: 28011±0.2% +189%

==============================================
netperf/tcp_stream on ICL (all in noise range)
25%
base: 43092±0.1%
patch: 42891±0.5%

50%
base: 19278±14.9%
patch: 22369±7.2%

75%
base: 16822±3.0%
patch: 17086±2.3%

100%
base: 18216±0.6%
patch: 18078±2.9%

===============================================
netperf/tcp_stream on SPR (all in noise range)
25%
base: 34491±0.3%
patch: 34886±0.5%

50%
base: 19278±14.9%
patch: 22369±7.2%

75%
base: 16822±3.0%
patch: 17086±2.3%

100%
base: 18216±0.6%
patch: 18078±2.9%

Reported-by: Nitin Tekchandani <nitin.tekchandani@xxxxxxxxx>
Suggested-by: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
Signed-off-by: Aaron Lu <aaron.lu@xxxxxxxxx>
Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Reviewed-by: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxxxx>
Reviewed-by: David Vernet <void@xxxxxxxxxxxxx>
Tested-by: Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxxxx>
Tested-by: Swapnil Sapkal <Swapnil.Sapkal@xxxxxxx>
Link: https://lkml.kernel.org/r/20230912065808.2530-2-aaron.lu@xxxxxxxxx
---
kernel/sched/fair.c | 13 ++++++++++++-
kernel/sched/sched.h | 1 +
2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c893721..d087787 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3876,7 +3876,8 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
*/
static inline void update_tg_load_avg(struct cfs_rq *cfs_rq)
{
- long delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;
+ long delta;
+ u64 now;

/*
* No need to update load_avg for root_task_group as it is not used.
@@ -3884,9 +3885,19 @@ static inline void update_tg_load_avg(struct cfs_rq *cfs_rq)
if (cfs_rq->tg == &root_task_group)
return;

+ /*
+ * For migration heavy workloads, access to tg->load_avg can be
+ * unbound. Limit the update rate to at most once per ms.
+ */
+ now = sched_clock_cpu(cpu_of(rq_of(cfs_rq)));
+ if (now - cfs_rq->last_update_tg_load_avg < NSEC_PER_MSEC)
+ return;
+
+ delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;
if (abs(delta) > cfs_rq->tg_load_avg_contrib / 64) {
atomic_long_add(delta, &cfs_rq->tg->load_avg);
cfs_rq->tg_load_avg_contrib = cfs_rq->avg.load_avg;
+ cfs_rq->last_update_tg_load_avg = now;
}
}

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 68768f4..887468c 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -594,6 +594,7 @@ struct cfs_rq {
} removed;

#ifdef CONFIG_FAIR_GROUP_SCHED
+ u64 last_update_tg_load_avg;
unsigned long tg_load_avg_contrib;
long propagate;
long prop_runnable_sum;