Re: [PATCH 5/6] sched/fair: Get rid of scaling utilization by capacity_orig

From: Vincent Guittot
Date: Tue Sep 08 2015 - 10:41:08 EST


On 8 September 2015 at 16:35, Morten Rasmussen <morten.rasmussen@xxxxxxx> wrote:
> On Tue, Sep 08, 2015 at 04:06:36PM +0200, Vincent Guittot wrote:
>> On 8 September 2015 at 14:52, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>> > On Tue, Sep 08, 2015 at 02:26:06PM +0200, Peter Zijlstra wrote:
>> >> On Tue, Sep 08, 2015 at 09:22:05AM +0200, Vincent Guittot wrote:
>> >> > No, but
>> >> > sa->util_avg = (sa->util_sum << SCHED_CAPACITY_SHIFT) / LOAD_AVG_MAX;
>> >> > will fix the unit issue.
>> >>
>> >> Tricky that, LOAD_AVG_MAX very much relies on the unit being 1<<10.
>> >>
>> >> And where load_sum already gets a factor 1024 from the weight
>> >> multiplication, util_sum does not get such a factor, and all the scaling
>> >> we do on it loses bits.
>> >>
>> >> So at the moment we go compute the util_avg value, we need to inflate
>> >> util_sum with an extra factor 1024 in order to make it work.
>> >>
>> >> And seeing that we do the shift up on sa->util_sum without consideration
>> >> of overflow, would it not make sense to add that factor before the
>> >> scaling and into the addition?
>> >>
>> >> Now, given all that, units are a complete mess here, and I'd not mind
>> >> something like:
>> >>
>> >> #if (SCHED_LOAD_SHIFT - SCHED_LOAD_RESOLUTION) != SCHED_CAPACITY_SHIFT
>> >> #error "something useful"
>> >> #endif
>> >>
>> >> somewhere near here.
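
For concreteness, the bookkeeping above with assumed values (a sketch,
taking LOAD_AVG_MAX == 47742 and SCHED_CAPACITY_SHIFT == 10, not code
from the patch):

	u32 util_sum = 47742;	/* fully utilized: util_sum == LOAD_AVG_MAX */

	/* no inflation: 47742 / 47742 == 1, the 1<<10 unit is gone */
	unsigned long flat = util_sum / 47742;

	/* inflated by 1<<10: (47742 << 10) / 47742 == 1024 */
	unsigned long util_avg = ((u64)util_sum << 10) / 47742;
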
>> >
>> > Something like the below...
>> >
>> > Another thing to ponder; the downside of scaled_delta_w is that it's
>> > fairly likely delta is small and you lose all bits, whereas the weight
>> > is likely to be large and could lose a few bits without issue.
>> >
>> > That is, in fixed point scaling like this, you want to start with the
>> > biggest numbers, not the smallest, otherwise you lose too much.
>> >
>> > The flip side is of course that now you can share a multiplication.
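
Toy numbers to make the ordering point concrete (picked for illustration,
not taken from the code): delta = 3, scale_freq = 512 (half capacity),
weight = 1024:

	(3 * 512) >> 10;		/* == 1: the .5 of 1.5 is truncated away   */
	((3 * 1024) * 512) >> 10;	/* == 1536: weight first, low bits survive */

Scaling the small delta first leaves 1 * 1024 == 1024 after the weight
multiplication; starting with the weight keeps the full 1536.
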
>> >
>> > --- a/kernel/sched/fair.c
>> > +++ b/kernel/sched/fair.c
>> > @@ -682,7 +682,7 @@ void init_entity_runnable_average(struct
>> >  	sa->load_avg = scale_load_down(se->load.weight);
>> >  	sa->load_sum = sa->load_avg * LOAD_AVG_MAX;
>> >  	sa->util_avg = scale_load_down(SCHED_LOAD_SCALE);
>> > -	sa->util_sum = LOAD_AVG_MAX;
>> > +	sa->util_sum = sa->util_avg * LOAD_AVG_MAX;
>> >  	/* when this task enqueue'ed, it will contribute to its cfs_rq's load_avg */
>> >  }
>> >
>> > @@ -2515,6 +2515,10 @@ static u32 __compute_runnable_contrib(u6
>> >  	return contrib + runnable_avg_yN_sum[n];
>> >  }
>> >
>> > +#if (SCHED_LOAD_SHIFT - SCHED_LOAD_RESOLUTION) != 10 || SCHED_CAPACITY_SHIFT != 10
>> > +#error "load tracking assumes 2^10 as unit"
>> > +#endif
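
With that hunk the factor 1024 moves into the sum itself, as suggested
above, so converting back is a plain divide (same assumed values, a
sketch only):

	u64 util_sum = 1024ULL * 47742;		/* sa->util_avg * LOAD_AVG_MAX */
	unsigned long util_avg = util_sum / 47742;	/* == 1024 == SCHED_CAPACITY_SCALE */
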
>>
>> so why don't we set SCHED_CAPACITY_SHIFT to SCHED_LOAD_SHIFT?
>
> Don't you mean:
>
> #define SCHED_LOAD_SHIFT (SCHED_CAPACITY_SHIFT + SCHED_LOAD_RESOLUTION)

Yes, you're right.
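
For reference, that relation would read roughly like so (a sketch against
the then-current defines, not a tested patch); it also makes the proposed
#error guard hold by construction, since SCHED_LOAD_SHIFT -
SCHED_LOAD_RESOLUTION == SCHED_CAPACITY_SHIFT for any resolution:

#define SCHED_CAPACITY_SHIFT	10
#define SCHED_CAPACITY_SCALE	(1L << SCHED_CAPACITY_SHIFT)

/* load resolution rides on top of the capacity unit */
#define SCHED_LOAD_SHIFT	(SCHED_CAPACITY_SHIFT + SCHED_LOAD_RESOLUTION)
#define SCHED_LOAD_SCALE	(1L << SCHED_LOAD_SHIFT)
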

>
> ?
>
> Or do you want to increase the capacity resolution as well if you
> increase the load resolution?