Re: [PATCH] sched: fix loss of fair sleeper bonus in switched_to_fair()

From: Wanpeng Li
Date: Mon Sep 07 2015 - 23:46:39 EST


On 9/7/15 10:02 PM, Peter Zijlstra wrote:
> Please always Cc at least the person who wrote the lines you modify.

> On Mon, Sep 07, 2015 at 05:45:20PM +0800, Wanpeng Li wrote:
> > The sleeper task's vruntime is normalized when it is moved away from
> > fair_sched_class, so that vruntime can be adjusted correctly whether
> > the task is running or sleeping when it is moved back. However,
> > normalizing it again in switched_to_fair() for a sleeping task loses
> > the fair sleeper bonus in place_entity() when vruntime -
> > cfs_rq->min_vruntime was large at the time the task left
> > fair_sched_class.
> >
> > This patch fixes it by adjusting vruntime only during migration, as
> > the original code did, since the task's vruntime has usually NOT been
> > normalized in that case.
> 
> Sorry, I cannot follow that at all. Maybe it's me being sleep deprived,
> but could you try that again?

When a task is switched away from the fair class while sleeping, its
vruntime is made relative (min_vruntime is subtracted) so that both cases
are handled correctly: the task may be sleeping when it leaves
fair_sched_class and running or sleeping when it is moved back. The
absolute vruntime is then recomputed in enqueue_entity(), and if the task
is still sleeping, place_entity() should grant it the fair sleeper bonus.
However, since commit 23ec30ddd7c1306 ('sched: add two functions for
att(det)aching a task to(from) a cfs_rq'), the absolute vruntime is
already computed in switched_to_fair(). As a result, the max_vruntime()
call in place_entity() selects that absolute vruntime as se->vruntime,
and the fair sleeper bonus is lost.

Regards,
Wanpeng Li


Signed-off-by: Wanpeng Li <wanpeng.li@xxxxxxxxxxx>
---
kernel/sched/fair.c | 11 +++++++----
1 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d26d3b7..eb9aa35 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8005,9 +8005,6 @@ static void attach_task_cfs_rq(struct task_struct *p)
 
 	/* Synchronize task with its cfs_rq */
 	attach_entity_load_avg(cfs_rq, se);
-
-	if (!vruntime_normalized(p))
-		se->vruntime += cfs_rq->min_vruntime;
 }
 
 static void switched_from_fair(struct rq *rq, struct task_struct *p)
@@ -8066,14 +8063,20 @@ void init_cfs_rq(struct cfs_rq *cfs_rq)
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 static void task_move_group_fair(struct task_struct *p)
 {
+	struct sched_entity *se = &p->se;
+	struct cfs_rq *cfs_rq = cfs_rq_of(se);
+
 	detach_task_cfs_rq(p);
 	set_task_rq(p, task_cpu(p));
 
 #ifdef CONFIG_SMP
 	/* Tell se's cfs_rq has been changed -- migrated */
-	p->se.avg.last_update_time = 0;
+	se->avg.last_update_time = 0;
 #endif
 	attach_task_cfs_rq(p);
+
+	if (!vruntime_normalized(p))
+		se->vruntime += cfs_rq->min_vruntime;
 }
 
 void free_fair_sched_group(struct task_group *tg)
--
1.7.1

