[PATCH] sched: Precise load checking in get_rr_interval_fair

From: Charles Wang
Date: Thu Mar 28 2013 - 09:38:48 EST


From: Charles Wang <muming.wq@xxxxxxxxxx>

A positive load weight on rq->cfs does not imply a positive load
weight on se's cfs_rq: with CONFIG_FAIR_GROUP_SCHED, cfs_rq_of(se) is
the cfs_rq of the task's group, which can carry zero load while other
groups keep rq->cfs loaded. When se's cfs_rq has zero load, the slice
calculated by sched_slice() is not sensible.
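
A hedged illustration (a user-space toy, not the kernel's exact
sched_slice() arithmetic; all names and values below are made up):
the slice is roughly the period scaled by the entity's share of its
cfs_rq's load, so a zero-load cfs_rq is a degenerate input:

#include <stdio.h>

/*
 * Toy model of the slice computation: the period scaled by the
 * entity's share of its cfs_rq's load.  Illustrative only.
 */
static unsigned long long toy_slice(unsigned long long period_ns,
				    unsigned long se_weight,
				    unsigned long cfs_rq_weight)
{
	if (!cfs_rq_weight)
		return 0;	/* the zero-load case this patch guards against */
	return period_ns * se_weight / cfs_rq_weight;
}

int main(void)
{
	/* rq->cfs can carry load while the task's own group cfs_rq is empty */
	printf("empty group cfs_rq: %llu ns\n", toy_slice(6000000ULL, 1024, 0));
	printf("two equal entities: %llu ns\n", toy_slice(6000000ULL, 1024, 2048));
	return 0;
}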

Check the load of se's cfs_rq instead of rq->cfs, and update the
comment accordingly.

Cc: Ingo Molnar <mingo@xxxxxxx>
Cc: Zhu Yanhai <gaoyang.zyh@xxxxxxxxxx>
Signed-off-by: Charles Wang <muming.wq@xxxxxxxxxx>

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 539760e..5d58ac9 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6086,14 +6086,15 @@ void unregister_fair_sched_group(struct task_group *tg, int cpu) { }
 static unsigned int get_rr_interval_fair(struct rq *rq, struct task_struct *task)
 {
 	struct sched_entity *se = &task->se;
+	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 	unsigned int rr_interval = 0;
 
 	/*
 	 * Time slice is 0 for SCHED_OTHER tasks that are on an otherwise
-	 * idle runqueue:
+	 * idle cfs_rq:
 	 */
-	if (rq->cfs.load.weight)
-		rr_interval = NS_TO_JIFFIES(sched_slice(cfs_rq_of(se), se));
+	if (cfs_rq->load.weight)
+		rr_interval = NS_TO_JIFFIES(sched_slice(cfs_rq, se));
 
 	return rr_interval;
 }
--
1.7.9.5
