Re: [PATCH 2/2] sched/fair: Simplify util_est

From: Vincent Guittot
Date: Thu Nov 30 2023 - 10:02:20 EST


On Thu, 30 Nov 2023 at 14:42, Dietmar Eggemann <dietmar.eggemann@xxxxxxx> wrote:
>
> On 27/11/2023 15:32, Vincent Guittot wrote:
> > With UTIL_EST_FASTUP now being permanent, we can take advantage of the
> > fact that the ewma jumps directly to a higher utilization at dequeue to
> > simplify util_est and remove the enqueued field.
> >
>
> Did a simple test with a ramp-up/ramp-down (10-80-10%) task affined to
> a CPU.
>
> https://nbviewer.org/github/deggeman/lisa/blob/ipynbs/ipynb/scratchpad/util_est_fastup.ipynb
>
> LGTM.

Thanks
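
For the ramp-up side, the behaviour the changelog relies on is the former
UTIL_EST_FASTUP path, which is now unconditional: at dequeue, when the last
utilization sample is above the current ewma, the ewma jumps straight to
that value instead of converging over several samples, i.e. something like
this sketch (using the dequeued/ewma naming of this patch):

        /*
         * Reset the EWMA on utilization increases; the moving average is
         * only used to smooth utilization decreases.
         */
        if (ewma <= dequeued) {
                ewma = dequeued;
                goto done;
        }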

>
> [...]
>
> > @@ -4879,27 +4865,22 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
> > * Skip update of task's estimated utilization when its members are
> > * already ~1% close to its last activation value.
> > */
> > - last_ewma_diff = ue.enqueued - ue.ewma;
> > - last_enqueued_diff -= ue.enqueued;
> > - if (within_margin(last_ewma_diff, UTIL_EST_MARGIN)) {
> > - if (!within_margin(last_enqueued_diff, UTIL_EST_MARGIN))
> > - goto done;
> > -
> > - return;
> > - }
> > + last_ewma_diff = ewma - dequeued;
> > + if (last_ewma_diff < UTIL_EST_MARGIN)
> > + goto done;
> >
> > /*
> > * To avoid overestimation of actual task utilization, skip updates if
> > * we cannot grant there is idle time in this CPU.
> > */
> > - if (task_util(p) > arch_scale_cpu_capacity(cpu_of(rq_of(cfs_rq))))
> > + if (dequeued > arch_scale_cpu_capacity(cpu_of(rq_of(cfs_rq))))
> > return;
>
> Not directly related to the changes: Should we not use `goto done` here
> as well to rearm UTIL_AVG_UNCHANGED?

I noticed this as well but didn't want to mix 2 topics.
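
Something along these lines (untested) is what you mean, I guess, i.e. let
that early bail out also go through the done: label so that the flag gets
set again:

        if (dequeued > arch_scale_cpu_capacity(cpu_of(rq_of(cfs_rq))))
-               return;
+               goto done;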

>
> > /*
> > * To avoid underestimate of task utilization, skip updates of EWMA if
> > * we cannot grant that thread got all CPU time it wanted.
> > */
> > - if ((ue.enqueued + UTIL_EST_MARGIN) < task_runnable(p))
> > + if ((dequeued + UTIL_EST_MARGIN) < task_runnable(p))
> > goto done;
> >
> >
> > @@ -4914,18 +4895,18 @@ static inline void util_est_update(struct cfs_rq *cfs_rq,
> > * ewma(t) = w * task_util(p) + (1-w) * ewma(t-1)
> > * = w * task_util(p) + ewma(t-1) - w * ewma(t-1)
> > * = w * (task_util(p) - ewma(t-1)) + ewma(t-1)
> > - * = w * ( last_ewma_diff ) + ewma(t-1)
> > - * = w * (last_ewma_diff + ewma(t-1) / w)
> > + * = w * ( -last_ewma_diff ) + ewma(t-1)
> > + * = w * (-last_ewma_diff + ewma(t-1) / w)
> > *
> > * Where 'w' is the weight of new samples, which is configured to be
> > * 0.25, thus making w=1/4 ( >>= UTIL_EST_WEIGHT_SHIFT)
> > */
>
> The text above still mentions ue.enqueued and that we store the current
> PELT value ... which isn't the case anymore.

argh, I missed this one

>
>
> > - ue.ewma <<= UTIL_EST_WEIGHT_SHIFT;
> > - ue.ewma += last_ewma_diff;
> > - ue.ewma >>= UTIL_EST_WEIGHT_SHIFT;
> > + ewma <<= UTIL_EST_WEIGHT_SHIFT;
> > + ewma -= last_ewma_diff;
> > + ewma >>= UTIL_EST_WEIGHT_SHIFT;
> > done:
> > - ue.enqueued |= UTIL_AVG_UNCHANGED;
> > - WRITE_ONCE(p->se.avg.util_est, ue);
> > + ewma |= UTIL_AVG_UNCHANGED;
> > + WRITE_ONCE(p->se.avg.util_est, ewma);
> >
> > trace_sched_util_est_se_tp(&p->se);
> > }
>
> [...]
>
> Reviewed-by: Dietmar Eggemann <dietmar.eggemann@xxxxxxx>

Thanks
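
FWIW, a quick numerical check of the new -last_ewma_diff form with made-up
numbers: with UTIL_EST_WEIGHT_SHIFT = 2, i.e. w = 1/4, take
ewma(t-1) = 400 and dequeued = 200, so last_ewma_diff = 200:

        ewma <<= UTIL_EST_WEIGHT_SHIFT; /* 400 << 2   = 1600 */
        ewma -= last_ewma_diff;         /* 1600 - 200 = 1400 */
        ewma >>= UTIL_EST_WEIGHT_SHIFT; /* 1400 >> 2  = 350  */

which matches w * dequeued + (1 - w) * ewma(t-1) = 0.25 * 200 + 0.75 * 400 = 350.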