[PATCH v3 0/2] sched: Don't trigger misfit if affinity is restricted

From: Qais Yousef
Date: Sun Dec 31 2023 - 12:52:35 EST


Changes since v2:

* Convert access of asym_cap_list to be rcu protected
* Add new patch to sort the list in descending order
* Move some declarations inside affinity check block
* Remove now redundant check against max_cpu_capacity in check_misfit_status()

(thanks Pierre!)

Changes since v1:

* Use asym_cap_list (thanks Dietmar!) to iterate instead of iterating
  through every cpu, which Vincent was concerned about.
* Use uclamped util to compare with capacity instead of util_fits_cpu()
  when iterating through capacities (Dietmar).
* Update commit log with test results to better demonstrate the problem

v1 discussion: https://lore.kernel.org/lkml/20230820203429.568884-1-qyousef@xxxxxxxxxxx/#t
v2 discussion: https://lore.kernel.org/lkml/20231212154056.626978-1-qyousef@xxxxxxxxxxx/

Food for thought:
-----------------

Should misfit cause balance_interval to double? Even if the answer is yes, this
patch is still needed to avoid unnecessary misfit load balancing being
triggered repeatedly.

Should the doubling be made independent of the tick value? As it stands, 3
failures with a 1ms tick increase the interval to 8ms, but with a 4ms tick the
same 3 failures take it to 32ms, which I think is too high too soon. A small
sketch of that arithmetic follows below.
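
For illustration, here is a minimal user-space sketch (not kernel code) of that
arithmetic. It assumes balance_interval starts at one tick period in ms and is
doubled on each failed attempt, similar to the doubling load_balance() applies
when it fails to move anything:

#include <stdio.h>

int main(void)
{
	int ticks_ms[] = { 1, 4 };

	for (int i = 0; i < 2; i++) {
		/* assumed starting value: one tick period in ms */
		unsigned int interval = ticks_ms[i];

		printf("TICK = %dms:", ticks_ms[i]);
		for (int fail = 1; fail <= 3; fail++) {
			/* doubled once per failed balance attempt */
			interval *= 2;
			printf(" %ums after %d failure(s)", interval, fail);
		}
		printf("\n");
	}
	return 0;
}

This reproduces the 8ms vs 32ms figures above for 1ms and 4ms ticks.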

Should balance_interval be capped to something more reasonable? On systems that
require fast response (an interactive desktop, for example), a balance_interval
of 64ms or above seems too high.


Thanks!

--
Qais Yousef

Qais Yousef (2):
sched/fair: Check a task has a fitting cpu when updating misfit
sched/topology: Sort asym_cap_list in descending order

kernel/sched/fair.c | 65 ++++++++++++++++++++++++++++++++++-------
kernel/sched/sched.h | 14 +++++++++
kernel/sched/topology.c | 43 +++++++++++++++------------
3 files changed, 94 insertions(+), 28 deletions(-)

--
2.34.1