Re: [v4.8-rc1 Regression] sched/fair: Apply more PELT fixes

From: Dietmar Eggemann
Date: Mon Oct 17 2016 - 07:50:23 EST


Hi Vincent,

On 17/10/16 10:09, Vincent Guittot wrote:
> On Friday 14 Oct 2016 at 12:04:02 (-0400), Joseph Salisbury wrote:
>> On 10/14/2016 11:18 AM, Vincent Guittot wrote:
>>> On Friday 14 Oct 2016 at 14:10:07 (+0100), Dietmar Eggemann wrote:
>>>> On 14/10/16 09:24, Vincent Guittot wrote:

[...]

> Could you try the patch below on top of the faulty kernel?
>
> ---
> kernel/sched/fair.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 8b03fb5..8926685 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -2902,7 +2902,8 @@ __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
>   */
>  static inline void update_tg_load_avg(struct cfs_rq *cfs_rq, int force)
>  {
> -        long delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;
> +        unsigned long load_avg = READ_ONCE(cfs_rq->avg.load_avg);
> +        long delta = load_avg - cfs_rq->tg_load_avg_contrib;
>
>          /*
>           * No need to update load_avg for root_task_group as it is not used.
> @@ -2912,7 +2913,7 @@ static inline void update_tg_load_avg(struct cfs_rq *cfs_rq, int force)
>
>          if (force || abs(delta) > cfs_rq->tg_load_avg_contrib / 64) {
>                  atomic_long_add(delta, &cfs_rq->tg->load_avg);
> -                cfs_rq->tg_load_avg_contrib = cfs_rq->avg.load_avg;
> +                cfs_rq->tg_load_avg_contrib = load_avg;
>          }
>  }
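
My reading of what the snapshot buys (correct me if I got this wrong):
delta and the new tg_load_avg_contrib have to be derived from the same
value, otherwise tg->load_avg drifts away from the sum of all contribs
whenever cfs_rq->avg.load_avg changes between the two reads. A minimal
userspace sketch of that pattern (my illustration, not the kernel code):

#include <stdatomic.h>

static atomic_long total;        /* stands in for tg->load_avg */

static void update_contrib(const volatile unsigned long *load_avg,
                           unsigned long *contrib)
{
        /* volatile read ~ READ_ONCE(): read once, never reloaded */
        unsigned long snap = *load_avg;
        long delta = snap - *contrib;

        atomic_fetch_add(&total, delta); /* add exactly what we ... */
        *contrib = snap;                 /* ... record as added, so total
                                          * stays the sum of all contribs */
}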

I tested it on an Ubuntu 16.10 server (on top of the default 4.8.0-22-generic
kernel) on a Lenovo T430, and it didn't help.

What does seem to cure it is getting rid of this snippet (part of commit
3d30544f0212, which was mentioned earlier in this thread):

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 039de34f1521..16c692049fbf 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -726,7 +726,6 @@ void post_init_entity_util_avg(struct sched_entity *se)
         struct sched_avg *sa = &se->avg;
         long cap = (long)(SCHED_CAPACITY_SCALE - cfs_rq->avg.util_avg) / 2;
         u64 now = cfs_rq_clock_task(cfs_rq);
-        int tg_update;

         if (cap > 0) {
                 if (cfs_rq->avg.util_avg != 0) {
@@ -759,10 +758,8 @@ void post_init_entity_util_avg(struct sched_entity *se)
                 }
         }

-        tg_update = update_cfs_rq_load_avg(now, cfs_rq, false);
+        update_cfs_rq_load_avg(now, cfs_rq, false);
         attach_entity_load_avg(cfs_rq, se);
-        if (tg_update)
-                update_tg_load_avg(cfs_rq, false);
 }

 #else /* !CONFIG_SMP */
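
My theory why this cures it (speculation on my side, please double-check):
in v4.8, online_fair_sched_group() calls post_init_entity_util_avg() for
the per-CPU group se's of a newly created tg, so with the snippet above
every tg creation immediately pushes the freshly attached, not yet decayed
group se load (~1024) into the parent's tg->load_avg, once per CPU.
Simplified sketch of that path (names from v4.8, heavily trimmed):

void online_fair_sched_group(struct task_group *tg)
{
        int i;

        for_each_possible_cpu(i) {
                /* rq->lock held in the real code */
                post_init_entity_util_avg(tg->se[i]);
                /* with the snippet: parent tg->load_avg += ~1024 here,
                 * before the new se's load ever had a chance to decay */
        }
}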

BTW, I guess .tg_load_avg can initially reach ~300000-400000 on such a
system because systemd creates all ~100 services (and with them the
corresponding 2nd-level tg's) at once. In my previous example there were
500ms between the creation of two tg's, so a lot of decaying went on in
between.
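
Back-of-the-envelope (my own numbers, assuming each per-CPU group se
attaches with an initial load_avg of scale_load_down(NICE_0_LOAD) = 1024
and the T430 has 4 logical CPUs):

        100 tg's * 4 CPUs * 1024 ~= 409600

which, with some decay mixed in, would fit the ~300000-400000 nicely.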