[RFC 0/1] sched/fair: Consider asymmetric scheduler groups in load balancer

From: Tobias Huschle
Date: Mon May 15 2023 - 07:55:50 EST


The current load balancer implementation assumes that all scheduler
groups within the same scheduler domain host the same number of CPUs.

This appears to hold for non-s390 architectures. On s390, however,
scheduler groups can be of unequal size.
The current scheduler behavior causes some s390 configs to use SMT
while some cores are still idle, leading to a performance degradation
under certain levels of workload.
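
To illustrate the effect with made-up numbers (the commit message
contains the actual example): assume one group with 16 CPUs (8 cores,
SMT2) and one with 8 CPUs (4 cores, SMT2). If the balancer aims for
equal task counts per group, the smaller group is pushed into SMT
while the larger one still has fully idle cores. The following
userspace model (not kernel code, all values hypothetical) compares
that to a size-aware split:

#include <stdio.h>

struct group {
	const char *name;
	int cpus;	/* CPUs in the group (SMT2: 2 per core) */
	int cores;	/* physical cores in the group          */
};

/* One task per CPU; SMT siblings are only used once all cores are busy. */
static void report(const char *policy, const struct group *g, int tasks)
{
	int busy_cores = tasks < g->cores ? tasks : g->cores;
	int smt_tasks  = tasks > g->cores ? tasks - g->cores : 0;

	printf("%-12s %s: tasks=%2d idle_cores=%d tasks_sharing_a_core=%d\n",
	       policy, g->name, tasks, g->cores - busy_cores, 2 * smt_tasks);
}

int main(void)
{
	struct group big   = { "big group",   16, 8 };
	struct group small = { "small group",  8, 4 };
	int total = 10;

	/* count-based balance: both groups end up with equal task counts */
	report("count-based", &big,   total / 2);
	report("count-based", &small, total - total / 2);

	/* size-aware balance: tasks split in proportion to the group size */
	int big_tasks = total * big.cpus / (big.cpus + small.cpus);
	report("size-aware", &big,   big_tasks);
	report("size-aware", &small, total - big_tasks);

	return 0;
}

With 10 tasks, the count-based split (5/5) forces the small group to
run two tasks on one core while the big group still has three idle
cores; the size-aware split (6/4) keeps every task on a core of its
own.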

Please refer to the patch's commit message for more details and an
example. This patch is a proposal on how to integrate the size of
scheduler groups into the decision process.

This patch is the most basic approach to address this issue and does
not claim to be perfect as-is.
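
The actual diff is in the patch itself; as a rough illustration, the
core of the decision change can be thought of as comparing tasks per
CPU instead of raw task counts. A minimal sketch of such a check
(function and parameter names are made up here):

/*
 * Sketch only, not the actual diff: treat the busiest group as
 * imbalanced against the local group only if it runs proportionally
 * more tasks per CPU, i.e.
 *
 *   busiest_running / busiest_weight  >  local_running / local_weight
 *
 * written in cross-multiplied form to avoid divisions.
 */
static inline int busiest_is_relatively_busier(unsigned int busiest_running,
					       unsigned int busiest_weight,
					       unsigned int local_running,
					       unsigned int local_weight)
{
	return busiest_running * local_weight >
	       local_running * busiest_weight;
}

With groups of 16 and 8 CPUs running 7 and 5 tasks respectively, this
check sees no imbalance (7/16 < 5/8), whereas a raw task-count
comparison would see the 16-CPU group as the busier one.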

Other ideas that also proved to address the problem, but are more
complex and potentially more precise:
1. On scheduler group building, count the number of CPUs within each
   group that are first in their sibling mask. This represents the
   number of CPUs that can be used before running into SMT. This
   should be slightly more accurate than using the full group weight
   if the number of available SMT threads per core varies
   (see the first sketch after this list).
2. Introduce a new scheduler group classification (smt_busy) between
   fully_busy and has_spare. This classification would indicate that
   a group still has spare capacity, but will run into SMT when using
   that capacity. This would make the load balancer prefer groups
   with fully idle CPUs over ones that are about to run into SMT
   (see the second sketch after this list).
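
A userspace model of idea 1 (topology and helper names are made up;
in the kernel this would be derived from the sibling masks when the
groups are built):

#include <stdio.h>

#define NR_CPUS 6

/* first_sibling[cpu] = lowest-numbered CPU sharing a core with cpu */
static const int first_sibling[NR_CPUS] = { 0, 0, 2, 2, 4, 5 };
/* CPUs 0/1 and 2/3 are SMT pairs, CPUs 4 and 5 have no SMT sibling  */

/* Number of CPUs in the group that are first in their sibling mask,
 * i.e. the number of tasks the group can take before using SMT.     */
static int smt_free_capacity(const int *cpus, int nr)
{
	int i, count = 0;

	for (i = 0; i < nr; i++)
		if (first_sibling[cpus[i]] == cpus[i])
			count++;
	return count;
}

int main(void)
{
	const int group_a[] = { 0, 1, 2, 3 };	/* weight 4, 2 cores */
	const int group_b[] = { 4, 5 };		/* weight 2, 2 cores */

	printf("group A: weight=4 smt_free_capacity=%d\n",
	       smt_free_capacity(group_a, 4));
	printf("group B: weight=2 smt_free_capacity=%d\n",
	       smt_free_capacity(group_b, 2));
	return 0;
}

Although group A has twice the weight of group B, both can host only
two tasks before a core has to run two SMT threads, which is exactly
what the full group weight would miss.

And a rough sketch of where idea 2's classification could sit
(group_smt_busy is a made-up name; the neighbouring classifications
exist today):

enum group_type {
	group_has_spare = 0,	/* spare capacity on fully idle cores     */
	group_smt_busy,		/* hypothetical: spare capacity, but only
				 * on SMT siblings of already busy cores  */
	group_fully_busy,
	/* busier classifications unchanged */
};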

Feedback would be greatly appreciated.

Tobias Huschle (1):
sched/fair: Consider asymmetric scheduler groups in load balancer

kernel/sched/fair.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

--
2.34.1