[PATCH v4 0/2] Adjust NUMA imbalance for multiple LLCs

From: Mel Gorman
Date: Fri Dec 10 2021 - 04:33:28 EST


Changelog since V3
o Calculate imb_numa_nr for multiple SD_NUMA domains
o Restore behaviour where communicating pairs remain on the same node

Commit 7d2b5dd0bcc4 ("sched/numa: Allow a floating imbalance between NUMA
nodes") allowed an imbalance between NUMA nodes such that communicating
tasks would not be pulled apart by the load balancer. This works fine when
there is a 1:1 relationship between LLC and NUMA node, but it can be
sub-optimal when a node contains multiple LLCs, because independent tasks
may prematurely be placed on CPUs that share a cache.
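As a reminder of the floating imbalance idea, here is a minimal standalone
sketch, not the code from commit 7d2b5dd0bcc4 itself: while the busier node
is running only a small number of tasks relative to its CPU count, the
imbalance is reported as zero so communicating tasks are left together. The
function name and the 25% threshold below are illustrative assumptions.

/* Standalone illustration; names and threshold are assumptions. */
#include <stdio.h>

/*
 * Report no imbalance while the busier node runs fewer tasks than a
 * quarter of its CPUs, so lightly loaded communicating tasks stay on
 * the same node; otherwise report the raw imbalance.
 */
static int adjust_numa_imbalance_sketch(int imbalance, int nr_running,
					int node_cpus)
{
	if (nr_running < node_cpus / 4)
		return 0;
	return imbalance;
}

int main(void)
{
	/* 2-task imbalance, 4 tasks running on a 32-CPU node: tolerated. */
	printf("%d\n", adjust_numa_imbalance_sketch(2, 4, 32));
	/* Same imbalance with 20 tasks running: reported to the balancer. */
	printf("%d\n", adjust_numa_imbalance_sketch(2, 20, 32));
	return 0;
}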

The series addresses two problems -- inconsistent use of scheduler domain
weights and sub-optimal performance when there are many LLCs per NUMA node.
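As a rough illustration of the second point only, the sketch below shows one
way an allowed-imbalance value could be scaled by the number of LLCs in a
node. It is a standalone, assumption-laden example rather than the formula
used by the patches; calc_allowed_imbalance, node_cpus and llc_cpus are
hypothetical names.

/* Standalone illustration; helper name and formula are assumptions. */
#include <stdio.h>

/*
 * Hypothetical helper: derive how many tasks of imbalance to tolerate
 * between NUMA nodes from the node size and the size of one LLC.
 */
static int calc_allowed_imbalance(int node_cpus, int llc_cpus)
{
	int nr_llcs = node_cpus / llc_cpus;

	/* Single LLC per node: tolerate a fraction of the whole node. */
	if (nr_llcs <= 1)
		return node_cpus / 4;

	/*
	 * Several LLCs per node: tolerate less, so that independent tasks
	 * spread across LLCs (and eventually nodes) sooner instead of
	 * stacking on CPUs that share a cache.
	 */
	return nr_llcs;
}

int main(void)
{
	/* A 64-CPU node made of 8 LLCs with 8 CPUs each. */
	printf("allowed imbalance: %d tasks\n", calc_allowed_imbalance(64, 8));
	/* A 32-CPU node with one node-wide LLC. */
	printf("allowed imbalance: %d tasks\n", calc_allowed_imbalance(32, 32));
	return 0;
}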

Mel Gorman (2):
  sched/fair: Use weight of SD_NUMA domain in find_busiest_group
  sched/fair: Adjust the allowed NUMA imbalance when SD_NUMA spans
    multiple LLCs

 include/linux/sched/topology.h |  1 +
 kernel/sched/fair.c            | 36 +++++++++++++++++++-----------------
 kernel/sched/topology.c        | 37 +++++++++++++++++++++++++++++++++++++
 3 files changed, 57 insertions(+), 17 deletions(-)

--
2.31.1