[PATCH 0/7] sched: Remove per rq load array

From: Dietmar Eggemann
Date: Mon May 27 2019 - 02:24:46 EST


Since commit fdf5f315d5cf ("sched/fair: Disable LB_BIAS by default")
(v4.20), the scheduler feature LB_BIAS is disabled, i.e. the scheduler
has only been using rq->cpu_load[0] for the CPU load values since then.
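
To illustrate what is being removed, here is a minimal userspace sketch
of the decay rule the old per-rq load array update code applied on every
tick (paraphrased from the code this series removes; the standalone
function signature and the test harness are assumptions of the sketch,
not kernel code):

#include <stdio.h>

#define CPU_LOAD_IDX_MAX 5

/*
 * cpu_load[0] holds the most recent load sample; higher indexes track
 * increasingly slow-moving averages:
 *
 *   cpu_load[i] = (old_load * (2^i - 1) + new_load) / 2^i
 *
 * With LB_BIAS disabled nothing reads indexes 1..4 anymore, so all of
 * this decay work is wasted.
 */
static void cpu_load_update(unsigned long *cpu_load, unsigned long this_load)
{
	int i;

	cpu_load[0] = this_load;

	for (i = 1; i < CPU_LOAD_IDX_MAX; i++) {
		unsigned long old_load = cpu_load[i];

		cpu_load[i] = (old_load * ((1UL << i) - 1) + this_load) >> i;
	}
}

int main(void)
{
	unsigned long cpu_load[CPU_LOAD_IDX_MAX] = { 0 };
	int i, tick;

	/* Feed a constant load for a few ticks and watch the decay. */
	for (tick = 0; tick < 4; tick++)
		cpu_load_update(cpu_load, 1024);

	for (i = 0; i < CPU_LOAD_IDX_MAX; i++)
		printf("cpu_load[%d] = %lu\n", i, cpu_load[i]);

	return 0;
}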

Tests back then (results are listed in the header of the patch mentioned
above) didn't show any regressions, and no related problems have been
reported in the meantime (v4.20 - v5.1).

The following patches remove all of the functionality that is no longer
needed:

(1) The per-rq load array update code
(2) CFS' source_load() and target_load(), used for conservative load
    balancing, which can be replaced directly by weighted_cpuload()
    (see the sketch after this list)
(3) The per-rq load array itself (rq->cpu_load[])
(4) The sched domain per-rq load indexes (sd->*_idx), since there are
    no other users left for them
(5) sum_weighted_load of the sched group load balance stats, since it
    is now identical to the actual sched group load
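
As a compilable illustration of point (2), the following userspace model
paraphrases the pre-removal source_load()/target_load() logic (the
struct rq stand-in, the lb_bias flag and main() are assumptions of the
sketch, not kernel code). With LB_BIAS off, the min()/max() biasing
against cpu_load[] is dead code and both functions collapse to
weighted_cpuload():

#include <stdbool.h>
#include <stdio.h>

#define CPU_LOAD_IDX_MAX 5

/* Minimal stand-in for struct rq; only the fields the sketch needs. */
struct rq {
	unsigned long cpu_load[CPU_LOAD_IDX_MAX];
	unsigned long runnable_load_avg;
};

/* LB_BIAS has defaulted to false since v4.20. */
static const bool lb_bias = false;

static unsigned long weighted_cpuload(struct rq *rq)
{
	return rq->runnable_load_avg;
}

static unsigned long source_load(struct rq *rq, int type)
{
	unsigned long total = weighted_cpuload(rq);

	if (type == 0 || !lb_bias)
		return total;	/* the only path taken since v4.20 */

	/* conservative: underestimate the load on the source cpu */
	return total < rq->cpu_load[type - 1] ? total : rq->cpu_load[type - 1];
}

static unsigned long target_load(struct rq *rq, int type)
{
	unsigned long total = weighted_cpuload(rq);

	if (type == 0 || !lb_bias)
		return total;

	/* conservative: overestimate the load on the target cpu */
	return total > rq->cpu_load[type - 1] ? total : rq->cpu_load[type - 1];
}

int main(void)
{
	struct rq rq = {
		.cpu_load = { 100, 200, 300, 400, 500 },
		.runnable_load_avg = 250,
	};
	int type;

	for (type = 0; type < CPU_LOAD_IDX_MAX; type++)
		printf("type=%d source=%lu target=%lu\n", type,
		       source_load(&rq, type), target_load(&rq, type));

	return 0;
}

Once both functions unconditionally return weighted_cpuload(), the type
argument loses its last user, and with it the sd->*_idx fields that
supplied it, which is what items (2) and (4) above clean up.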

Dietmar Eggemann (7):
sched: Remove rq->cpu_load[] update code
sched/fair: Replace source_load() & target_load() w/
weighted_cpuload()
sched/debug: Remove sd->*_idx range on sysctl
sched: Remove rq->cpu_load[]
sched: Remove sd->*_idx
sched/fair: Remove sgs->sum_weighted_load
sched/fair: Rename weighted_cpuload() to cpu_load()

include/linux/sched/nohz.h | 8 -
include/linux/sched/topology.h | 5 -
kernel/sched/core.c | 7 +-
kernel/sched/debug.c | 41 +---
kernel/sched/fair.c | 385 ++-------------------------------
kernel/sched/features.h | 1 -
kernel/sched/sched.h | 8 -
kernel/sched/topology.c | 10 -
kernel/time/tick-sched.c | 2 -
9 files changed, 33 insertions(+), 434 deletions(-)

--
2.17.1