[PATCH 6/7] sched: consider runnable load average in move_tasks

From: Alex Shi
Date: Mon Dec 03 2012 - 10:00:53 EST


Aside from its use in the background load tracking, the runnable load
average also matters in move_tasks(), a key function in load balancing.
We need to consider the runnable load average there as well, so that
loads are compared apples to apples.
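To illustrate, the hierarchical load that ends up feeding task_h_load()
can be sketched as the recursion below (illustrative only: the helper
name tg_h_load() is made up here, and the real tg_load_down() is driven
iteratively by walk_tg_tree() rather than by recursion):

	static unsigned long tg_h_load(struct task_group *tg, long cpu)
	{
		unsigned long load;

		/* Root group: use the rq's tracked load, not raw load.weight. */
		if (!tg->parent)
			return cpu_rq(cpu)->avg.load_avg_contrib;

		/*
		 * Scale the parent's h_load by this group's share of the
		 * parent's tracked load (runnable + blocked), so numerator
		 * and denominator are both load averages.
		 */
		load = tg_h_load(tg->parent, cpu);
		load *= tg->se[cpu]->avg.load_avg_contrib;
		load /= tg->parent->cfs_rq[cpu]->runnable_load_avg +
			tg->parent->cfs_rq[cpu]->blocked_load_avg + 1;

		return load;
	}

task_h_load() then applies the same ratio one more level down, dividing
p->se.avg.load_avg_contrib by the cfs_rq's runnable + blocked load, as
in the second hunk below.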

Signed-off-by: Alex Shi <alex.shi@xxxxxxxxx>
---
kernel/sched/fair.c | 12 +++++++-----
1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 407ef61..ca0e051 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4121,11 +4121,12 @@ static int tg_load_down(struct task_group *tg, void *data)
long cpu = (long)data;

if (!tg->parent) {
- load = cpu_rq(cpu)->load.weight;
+ load = cpu_rq(cpu)->avg.load_avg_contrib;
} else {
load = tg->parent->cfs_rq[cpu]->h_load;
- load *= tg->se[cpu]->load.weight;
- load /= tg->parent->cfs_rq[cpu]->load.weight + 1;
+ load *= tg->se[cpu]->avg.load_avg_contrib;
+ load /= tg->parent->cfs_rq[cpu]->runnable_load_avg
+ + tg->parent->cfs_rq[cpu]->blocked_load_avg + 1;
}

tg->cfs_rq[cpu]->h_load = load;
@@ -4153,8 +4154,9 @@ static unsigned long task_h_load(struct task_struct *p)
struct cfs_rq *cfs_rq = task_cfs_rq(p);
unsigned long load;

- load = p->se.load.weight;
- load = div_u64(load * cfs_rq->h_load, cfs_rq->load.weight + 1);
+ load = p->se.avg.load_avg_contrib;
+ load = div_u64(load * cfs_rq->h_load,
+ cfs_rq->runnable_load_avg + cfs_rq->blocked_load_avg + 1);

return load;
}
--
1.7.12

--
Thanks
Alex