[PATCH 0/6] Enable Cluster Scheduling for x86 Hybrid CPUs

From: Tim Chen
Date: Thu May 04 2023 - 12:11:44 EST


The cluster scheduling domain is not enabled on x86 hybrid CPUs because
the logic for proper load balancing between a cluster containing the SMT
CPUs of a single Big core and a cluster containing multiple Atom CPUs
has been missing.

When cluster scheduling was first introduced to x86, it was noticed
that on hybrid CPUs a single threaded task often ended up on an Atom
core (E-core) instead of an idle Big core (P-core), resulting in lower
performance. Hence cluster scheduling was disabled on x86 hybrid CPUs.
(See: https://www.phoronix.com/review/linux-516-regress)

Ricardo recently introduced a patch series that greatly improved
the load balancing logic between P-cores and E-cores on x86 hybrid
CPUs.
https://lore.kernel.org/lkml/20230429153219.GC1495785@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx/T/#m16ebc8de64dbf4c54adebab701b42b47805105f4

However, that patch series is not enough to enable cluster scheduling
on hybrid x86 CPUs. This patch series provides the additional fixes
needed for load balancing between a cluster sched group consisting of
the SMT CPUs of Big cores and a cluster sched group consisting of Atom
CPUs. With these patches applied on top of Ricardo's patch series, load
is properly balanced between the P-core and E-core clusters. Idle CPUs
are used in the proper order (a sketch follows this list):

1) SMT CPU on an idle P-core,
2) idle E-core,
3) unused SMT CPU with a busy sibling.
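
As an illustration of that pick order, here is a minimal sketch. It is
not the code in this series; the three find_idle_*() helpers are
hypothetical stand-ins for the idle CPU scans in kernel/sched/fair.c:

/* Hypothetical helpers standing in for the scans in fair.c. */
extern int find_idle_pcore_smt_cpu(void);
extern int find_idle_ecore_cpu(void);
extern int find_idle_smt_cpu_busy_sibling(void);

/* Sketch of the idle CPU preference order listed above. */
static int pick_idle_cpu_hybrid(void)
{
	int cpu;

	/* 1) an SMT CPU of a fully idle P-core */
	cpu = find_idle_pcore_smt_cpu();
	if (cpu >= 0)
		return cpu;

	/* 2) an idle E-core */
	cpu = find_idle_ecore_cpu();
	if (cpu >= 0)
		return cpu;

	/* 3) an unused SMT CPU whose sibling is busy */
	return find_idle_smt_cpu_busy_sibling();
}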

On x86, the CPUs in a cluster share an L2 cache. With cluster scheduling
enabled, load is now also balanced between the clusters, potentially
reducing L2 cache contention.
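
For reference, cluster siblings can be inspected from user space through
sysfs. A minimal sketch, assuming the cluster_cpus_list topology file
exposed by recent kernels:

#include <stdio.h>

/*
 * Print each CPU's cluster siblings from sysfs. Stops at the first
 * CPU without a cluster_cpus_list file.
 */
int main(void)
{
	char path[128], buf[64];
	int cpu;

	for (cpu = 0; ; cpu++) {
		FILE *f;

		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu%d/topology/cluster_cpus_list",
			 cpu);
		f = fopen(path, "r");
		if (!f)
			break;
		if (fgets(buf, sizeof(buf), f))
			printf("cpu%d cluster: %s", cpu, buf);
		fclose(f);
	}
	return 0;
}

With cluster topology enabled, each 4-CPU Atom module should show up as
one cluster and each P-core SMT pair as its own (an expectation based on
the L2 sharing described above, not a verified output).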

I did some experiments on an Alder Lake with 6 P-cores and 8 E-cores,
organized in two clusters of 4 E-cores each.

I tested some single threaded benchmarks in the Phoronix suite that
previously showed regressions when cluster scheduling was first enabled.
With this patch series, cluster scheduling performs as well as the
vanilla kernel.

Single Threaded     6.3-rc5               with cluster          Improvement
Benchmark           (run-run deviation)   scheduling            in Performance
                                          (run-run deviation)
---------------------------------------------------------------------------
tjbench             (+/- 0.08%)           (+/- 0.23%)           -0.23%
PhPbench            (+/- 0.31%)           (+/- 0.89%)           -0.39%
flac                (+/- 0.58%)           (+/- 0.22%)           +0.17%
pybench             (+/- 3.16%)           (+/- 0.27%)           +2.55%

For multi-threaded benchmarks, I tried kernel build and TensorFlow Lite.
Cluster scheduling did best in the 10 thread case, where 6 threads run
on the P-cores, 2 threads on one Atom cluster and 2 threads on the other
Atom cluster, whereas the vanilla kernel puts 6 threads on the P-cores
and 4 threads on a single Atom cluster. The differences are small,
though, and fall within run-to-run variations. (A sketch for reproducing
the 6 + 2 + 2 placement by hand follows the table below.)

Multi Threaded      6.3-rc5               with cluster          Improvement
Benchmark           (run-run deviation)   scheduling            in Performance
(-#threads)                               (run-run deviation)
---------------------------------------------------------------------------
Kbuild-8            (+/- 2.90%)           (+/- 1.16%)           -0.76%
Kbuild-10           (+/- 3.08%)           (+/- 3.09%)           +0.64%
Kbuild-12           (+/- 3.28%)           (+/- 3.55%)           +0.91%
Tensor Lite-8       (+/- 4.84%)           (+/- 4.61%)           -0.23%
Tensor Lite-10      (+/- 0.87%)           (+/- 1.45%)           +0.47%
Tensor Lite-12      (+/- 1.37%)           (+/- 1.04%)           -0.12%
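
For comparison purposes, the 6 + 2 + 2 placement from the 10 thread case
can also be reproduced by hand with explicit affinity. A sketch, assuming
CPUs 0-11 are the P-core SMT pairs and CPUs 12-15 / 16-19 the two Atom
clusters (the numbering is an assumption; check your topology first):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/*
 * Pin 10 spinning threads in the 6 + 2 + 2 pattern discussed above:
 * one thread per P-core, 2 threads on each Atom cluster. The CPU
 * numbers are assumptions for the Alder Lake system above.
 */
static int cpus[10] = { 0, 2, 4, 6, 8, 10,	/* one SMT CPU per P-core */
			12, 13,			/* first Atom cluster */
			16, 17 };		/* second Atom cluster */

static void *spin(void *arg)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(*(int *)arg, &set);
	pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
	for (;;)		/* busy loop; observe placement with top */
		;
}

int main(void)
{
	pthread_t tid[10];
	int i;

	for (i = 0; i < 10; i++)
		pthread_create(&tid[i], NULL, spin, &cpus[i]);
	pthread_join(tid[0], NULL);	/* never returns */
	return 0;
}

Build with gcc -pthread and watch the per-CPU load with top or htop.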

Thanks for reviewing these patches.

Tim Chen

Ricardo Neri (1):
sched/fair: Consider the idle state of the whole core for load balance

Tim C Chen (5):
sched/topology: Propagate SMT flags when removing degenerate domain
sched/fair: Check whether active load balance is needed in busiest
group
sched/fair: Fix busiest group selection for asym groups
sched/fair: Skip prefer sibling move between SMT group and non-SMT
group
sched/x86: Add cluster topology to hybrid CPU

 arch/x86/kernel/smpboot.c |  3 ++
 kernel/sched/fair.c       | 78 ++++++++++++++++++++++++++++++++++++++-
 kernel/sched/topology.c   |  7 +++-
 3 files changed, 86 insertions(+), 2 deletions(-)

--
2.32.0