[RFCv7 PATCH 06/10] sched/fair: cpufreq_sched triggers for load balancing

From: Steve Muckle
Date: Mon Feb 22 2016 - 20:24:01 EST


From: Juri Lelli <juri.lelli@xxxxxxx>

As we don't trigger freq changes from {en,de}queue_task_fair() during load
balancing, we need to do so explicitly on the load balancing paths.

[smuckle@xxxxxxxxxx: move update_capacity_of calls so rq lock is held]

cc: Ingo Molnar <mingo@xxxxxxxxxx>
cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Signed-off-by: Juri Lelli <juri.lelli@xxxxxxx>
Signed-off-by: Steve Muckle <smuckle@xxxxxxxxxx>
---
kernel/sched/fair.c | 23 +++++++++++++++++++++--
1 file changed, 21 insertions(+), 2 deletions(-)
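
Note: all of the hooks below funnel into update_capacity_of(), the helper
added earlier in this series. For reference, a minimal sketch of its shape;
sched_freq(), set_cfs_cpu_capacity(), cpu_util() and capacity_orig_of() are
assumed here to come from the preceding patches, so treat this as an
approximation rather than the exact helper:

	static void update_capacity_of(int cpu)
	{
		unsigned long req_cap;

		/* Nothing to do if scheduler-driven freq selection is off. */
		if (!sched_freq())
			return;

		/*
		 * Express the CPU's current utilization as a capacity
		 * request in [0, SCHED_CAPACITY_SCALE], normalized by the
		 * CPU's original capacity.
		 */
		req_cap = cpu_util(cpu) * SCHED_CAPACITY_SCALE /
			  capacity_orig_of(cpu);
		set_cfs_cpu_capacity(cpu, true, req_cap);
	}

Attaching a task can only raise the destination CPU's utilization, so the
attach paths may raise its OPP; detaching can only lower the source CPU's
utilization, hence the "lower" hooks after detach_tasks() and
detach_one_task(). Per the bracketed note above, the calls are made while
the relevant rq->lock is held, so the capacity request is computed against
a view of the runqueue that is consistent with the task movement just
performed.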

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e7fab8f..5531513 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6107,6 +6107,10 @@ static void attach_one_task(struct rq *rq, struct task_struct *p)
{
raw_spin_lock(&rq->lock);
attach_task(rq, p);
+ /*
+ * We want to potentially raise target_cpu's OPP.
+ */
+ update_capacity_of(cpu_of(rq));
raw_spin_unlock(&rq->lock);
}

@@ -6128,6 +6132,11 @@ static void attach_tasks(struct lb_env *env)
attach_task(env->dst_rq, p);
}

+ /*
+ * We want to potentially raise env.dst_cpu's OPP.
+ */
+ update_capacity_of(env->dst_cpu);
+
raw_spin_unlock(&env->dst_rq->lock);
}

@@ -7267,6 +7276,11 @@ more_balance:
* ld_moved - cumulative load moved across iterations
*/
cur_ld_moved = detach_tasks(&env);
+ /*
+ * We want to potentially lower env.src_cpu's OPP.
+ */
+ if (cur_ld_moved)
+ update_capacity_of(env.src_cpu);

/*
* We've detached some tasks from busiest_rq. Every
@@ -7631,8 +7645,13 @@ static int active_load_balance_cpu_stop(void *data)
schedstat_inc(sd, alb_count);

p = detach_one_task(&env);
- if (p)
+ if (p) {
schedstat_inc(sd, alb_pushed);
- else
+ /*
+ * We want to potentially lower env.src_cpu's OPP.
+ */
+ update_capacity_of(env.src_cpu);
+ } else {
schedstat_inc(sd, alb_failed);
+ }
}
--
2.4.10