[PATCH v3] sched: fix first task of a task group is attached twice

From: Vincent Guittot
Date: Mon May 30 2016 - 11:52:37 EST


cfs_rq->avg.last_update_time is initialized to 0, with the side effect
that the 1st sched_entity to be attached keeps its last_update_time set
to 0 and is therefore attached once again during the enqueue.
Initialize cfs_rq->avg.last_update_time to 1 instead.

Signed-off-by: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
---

v3:
- add initialization of load_last_update_time_copy for non-64-bit systems
- move init into init_cfs_rq

v2:
- rq_clock_task(rq_of(cfs_rq)) can't be used because the rq lock is not held here

kernel/sched/fair.c | 10 ++++++++++
1 file changed, 10 insertions(+)
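For context, the double attach can be sketched with a standalone toy model
(not the kernel code: the struct layouts, attach() and enqueue() below are
simplified stand-ins for attach_entity_load_avg() and the enqueue-time
"was this se ever attached?" check, which treats last_update_time == 0 as
"never attached"):

```c
#include <assert.h>

typedef unsigned long long u64;

struct toy_sched_avg { u64 last_update_time; long load_avg; };
struct toy_cfs_rq    { struct toy_sched_avg avg; };
struct toy_se        { struct toy_sched_avg avg; };

static int attach_count;

/* stand-in for attach_entity_load_avg(): stamp the se with the
 * cfs_rq clock and fold its load into the group */
static void attach(struct toy_cfs_rq *cfs_rq, struct toy_se *se)
{
	se->avg.last_update_time = cfs_rq->avg.last_update_time;
	cfs_rq->avg.load_avg += se->avg.load_avg;
	attach_count++;
}

/* enqueue path: last_update_time == 0 is read as "never attached",
 * so the se is attached (again) */
static void enqueue(struct toy_cfs_rq *cfs_rq, struct toy_se *se)
{
	if (!se->avg.last_update_time)
		attach(cfs_rq, se);
}
```

With the group's last_update_time left at 0, attach() copies 0 into the
se, so the later enqueue() attaches it a second time and its load is
counted twice; initializing the group clock to 1 makes the enqueue-time
check see a non-zero stamp and skip the second attach.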

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 218f8e8..86be9c1 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8459,6 +8459,16 @@ void init_cfs_rq(struct cfs_rq *cfs_rq)
cfs_rq->min_vruntime_copy = cfs_rq->min_vruntime;
#endif
#ifdef CONFIG_SMP
+ /*
+ * Set last_update_time to something different from 0 to make
+ * sure the 1st sched_entity will not be attached twice: once
+ * when attaching the task to the group and one more time when
+ * enqueueing the task.
+ */
+ cfs_rq->avg.last_update_time = 1;
+#ifndef CONFIG_64BIT
+ cfs_rq->load_last_update_time_copy = 1;
+#endif
atomic_long_set(&cfs_rq->removed_load_avg, 0);
atomic_long_set(&cfs_rq->removed_util_avg, 0);
#endif
--
1.9.1