[PATCH 4/7] sched/fair: Clean up the logic in fix_small_imbalance()

From: Dietmar Eggemann
Date: Fri Apr 29 2016 - 15:34:04 EST


Avoid the need to add scaled_busy_load_per_task to both sides of the
if condition which determines whether the imbalance has to be set to
busiest->load_per_task or not.

The imbn variable was introduced with commit 2dd73a4f09be ("[PATCH]
sched: implement smpnice") and the original if condition was

if (max_load - this_load >= busiest_load_per_task * imbn)

which over time changed into the current version, in which
scaled_busy_load_per_task appears on both sides of the if condition.
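
For reference, the condition in its current form (see the hunk below)
reads

if (busiest->avg_load + scaled_busy_load_per_task >=
    local->avg_load + (scaled_busy_load_per_task * imbn)) {

Subtracting scaled_busy_load_per_task from both sides turns this into
the equivalent

busiest->avg_load >=
    local->avg_load + scaled_busy_load_per_task * (imbn - 1)

i.e. the extra term on the left-hand side merely lowers the effective
imbn by one.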

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@xxxxxxx>
---

The original smpnice implementation sets the imbalance to the avg task
load of the busiest sched group (sg) if the difference between the avg
load of the busiest sg and that of the local sg is greater than or
equal to imbn times the avg task load of the busiest sg. imbn is 2 if
there is no task running in the local sg or if the avg task load of
the busiest sg is smaller than or equal to that of the local sg;
otherwise imbn is lowered to 1.

Setting imbn to 2 makes sense when all tasks have the same priority:
if there are n tasks on the local sg, there have to be at least n + 2
tasks on the busiest sg to give load balancing a chance to pull over
one task.
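
A concrete illustration, assuming every task contributes the same load
L and all group capacities are equal (so busiest->load_per_task and
scaled_busy_load_per_task both equal L):

local sg:    n     tasks -> local->avg_load   = n * L
busiest sg:  n + 2 tasks -> busiest->avg_load = (n + 2) * L

busiest->avg_load - local->avg_load = 2 * L >= L * imbn

With only n + 1 tasks on the busiest sg the difference shrinks to L,
the condition fails, and rightly so: pulling a task over would merely
move the imbalance to the local sg.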

Setting imbn to 1 makes sense when the avg task load of the busiest
sg is greater than that of the local sg, since there could then be at
least one task on the busiest sg with a load small enough to be pulled
over to the local sg.

The current version effectively lowered imbn by one, and this patch
makes that explicit: imbn is 1 instead of 2 in case the avg task load
of sg_busiest isn't greater than that of sg_local, and 0 instead of 1
in the other case. Note that this behaviour is no longer in sync with
the explanation above of why imbn is set to certain values.
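
The equivalence of the old and the new condition can be brute-force
checked with the following standalone sketch (plain userspace C, not
kernel code; variable names and value ranges are made up for
illustration):

#include <assert.h>
#include <stdio.h>

int main(void)
{
	unsigned long x, busiest_avg, local_avg;
	unsigned int imbn;

	/* x stands in for scaled_busy_load_per_task */
	for (x = 0; x <= 2048; x += 128)
		for (busiest_avg = 0; busiest_avg <= 4096; busiest_avg += 256)
			for (local_avg = 0; local_avg <= 4096; local_avg += 256)
				for (imbn = 1; imbn <= 2; imbn++)
					/* old condition vs. patched condition */
					assert((busiest_avg + x >= local_avg + x * imbn) ==
					       (busiest_avg >= local_avg + x * (imbn - 1)));

	printf("old and new condition agree\n");
	return 0;
}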

kernel/sched/fair.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c066574cff04..dc4828bbe50d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6915,7 +6915,7 @@ static inline
void fix_small_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
{
unsigned long tmp, capa_now = 0, capa_move = 0;
- unsigned int imbn = 2;
+ unsigned int imbn = 1;
unsigned long scaled_busy_load_per_task;
struct sg_lb_stats *local, *busiest;

@@ -6925,13 +6925,13 @@ void fix_small_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
if (!local->sum_nr_running)
local->load_per_task = cpu_avg_load_per_task(env->dst_cpu);
else if (busiest->load_per_task > local->load_per_task)
- imbn = 1;
+ imbn = 0;

scaled_busy_load_per_task =
(busiest->load_per_task * SCHED_CAPACITY_SCALE) /
busiest->group_capacity;

- if (busiest->avg_load + scaled_busy_load_per_task >=
+ if (busiest->avg_load >=
local->avg_load + (scaled_busy_load_per_task * imbn)) {
env->imbalance = busiest->load_per_task;
return;
--
1.9.1