Re: [PATCH 09/16] sched/fair: Let asymmetric cpu configurations balance at wake-up

From: Wanpeng Li
Date: Wed May 25 2016 - 07:18:26 EST


2016-05-25 18:54 GMT+08:00 Morten Rasmussen <morten.rasmussen@xxxxxxx>:
> On Wed, May 25, 2016 at 06:29:33PM +0800, Wanpeng Li wrote:
>> 2016-05-25 17:49 GMT+08:00 Morten Rasmussen <morten.rasmussen@xxxxxxx>:
>> > On Wed, May 25, 2016 at 02:57:00PM +0800, Wanpeng Li wrote:
>> >> 2016-05-23 18:58 GMT+08:00 Morten Rasmussen <morten.rasmussen@xxxxxxx>:
>> >> > Currently, SD_WAKE_AFFINE always takes priority over wakeup balancing if
>> >> > SD_BALANCE_WAKE is set on the sched_domains. For asymmetric
>> >> > configurations SD_WAKE_AFFINE is only desirable if the waking task's
>> >> > compute demand (utilization) is suitable for the cpu capacities
>> >> > available within the SD_WAKE_AFFINE sched_domain. If not, let wakeup
>> >> > balancing take over (find_idlest_{group, cpu}()).
>> >> >
>> >> > The assumption is that SD_WAKE_AFFINE is never set for a sched_domain
>> >> > containing cpus with different capacities. This is enforced by a
>> >> > previous patch based on the SD_ASYM_CPUCAPACITY flag.
>> >> >
>> >> > Ideally, we shouldn't set 'want_affine' in the first place, but we don't
>> >> > know if SD_BALANCE_WAKE is enabled on the sched_domain(s) until we start
>> >> > traversing them.
>> >> >
>> >> > cc: Ingo Molnar <mingo@xxxxxxxxxx>
>> >> > cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
>> >> >
>> >> > Signed-off-by: Morten Rasmussen <morten.rasmussen@xxxxxxx>
>> >> > ---
>> >> > kernel/sched/fair.c | 28 +++++++++++++++++++++++++++-
>> >> > 1 file changed, 27 insertions(+), 1 deletion(-)
>> >> >
>> >> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> >> > index 564215d..ce44fa7 100644
>> >> > --- a/kernel/sched/fair.c
>> >> > +++ b/kernel/sched/fair.c
>> >> > @@ -114,6 +114,12 @@ unsigned int __read_mostly sysctl_sched_shares_window = 10000000UL;
>> >> >  unsigned int sysctl_sched_cfs_bandwidth_slice = 5000UL;
>> >> >  #endif
>> >> >
>> >> > +/*
>> >> > + * The margin used when comparing utilization with cpu capacity:
>> >> > + * util * 1024 < capacity * margin
>> >> > + */
>> >> > +unsigned int capacity_margin = 1280; /* ~20% */
>> >> > +
>> >> >  static inline void update_load_add(struct load_weight *lw, unsigned long inc)
>> >> >  {
>> >> >  	lw->weight += inc;
>> >> > @@ -5293,6 +5299,25 @@ static int cpu_util(int cpu)
>> >> >  	return (util >= capacity) ? capacity : util;
>> >> >  }
>> >> >
>> >> > +static inline int task_util(struct task_struct *p)
>> >> > +{
>> >> > +	return p->se.avg.util_avg;
>> >> > +}
>> >> > +
>> >> > +static int wake_cap(struct task_struct *p, int cpu, int prev_cpu)
>> >> > +{
>> >> > +	long delta;
>> >> > +	long prev_cap = capacity_of(prev_cpu);
>> >> > +
>> >> > +	delta = cpu_rq(cpu)->rd->max_cpu_capacity - prev_cap;
>> >> > +
>> >> > +	/* prev_cpu is fairly close to max, no need to abort wake_affine */
>> >> > +	if (delta < prev_cap >> 3)
>> >> > +		return 0;
>> >> > +
>> >> > +	return prev_cap * 1024 < task_util(p) * capacity_margin;
>> >> > +}
>> >>
>> >> If a task's util_avg is SCHED_CAPACITY_SCALE and it is running on an
>> >> x86 box with SMT enabled, then each HT has capacity 589, so wake_cap()
>> >> will always reject wake affine, right?
>> >
>> > The idea is that SMT systems bail out already at the first condition.
>> > We should have max_cpu_capacity == prev_cap == 589, so delta should be
>> > zero, which makes the first condition true and wake_cap() return 0 for
>> > any system with symmetric capacities, regardless of the actual
>> > capacity values.
>> >
>> > Note that this isn't entirely true, as I used capacity_of() for
>> > prev_cap; if I change that to capacity_orig_of() it should be true.
>> >
>> > By making the !wake_cap() condition always true for want_affine, we
>> > should preserve existing behaviour for SMT/SMP. The only overhead is the
>> > capacity delta computation and comparison, which should be cheap.
>> >
>> > Does that make sense?
>>
>> Fair enough, thanks for your explanation.
>>
>> >
>> > Btw, task util_avg == SCHED_CAPACITY_SCALE should only be possible
>> > temporarily; it should decay to util_avg <=
>> > capacity_orig_of(task_cpu(p)) over time. That doesn't affect your
>>
>> Sorry, I can't find where it decays to capacity_orig in
>> __update_load_avg(); could you elaborate?
>
> I should have checked the code before writing that :-( I thought the
> scaling by arch_scale_cpu_capacity() in __update_load_avg() would do
> that, but it turns out that the default implementation of
> arch_scale_cpu_capacity() doesn't when we pass a NULL pointer for the
> sched_domain; otherwise it would have returned smt_gain/span_weight ==
> capacity_orig_of(cpu).
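>
> For reference, the default implementation looks roughly like this
> (from kernel/sched/sched.h):
>
>	static __always_inline
>	unsigned long arch_scale_cpu_capacity(struct sched_domain *sd, int cpu)
>	{
>		if (sd && (sd->flags & SD_SHARE_CPUCAPACITY) &&
>		    (sd->span_weight > 1))
>			return sd->smt_gain / sd->span_weight;
>
>		/* sd == NULL ends up here: no scaling applied */
>		return SCHED_CAPACITY_SCALE;
>	}
>
> so when __update_load_avg() passes a NULL sched_domain, util_avg is
> never scaled down towards capacity_orig.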

Thanks for the explanation. :)
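
To double-check my understanding of the asymmetric case the patch
targets, a sketch with hypothetical big.LITTLE capacities (big = 1024,
little = 430) and a task with task_util(p) == 400 waking on the little
cpu:

	delta = 1024 - 430;	/* 594, prev_cpu is far from max capacity */
	/* 594 < (430 >> 3 == 53) is false, so the margin check runs: */
	/* 430 * 1024 == 440320 < 400 * 1280 == 512000 */
	return 1;		/* abort wake_affine, use wakeup balancing */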

Regards,
Wanpeng Li