Re: [PATCH 4/4] sched: bias to target cpu load to reduce task moving

From: Alex Shi
Date: Fri Dec 20 2013 - 09:45:30 EST


On 12/20/2013 07:19 PM, Morten Rasmussen wrote:
>> @@ -4132,10 +4137,10 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p, int this_cpu)
>>
>>  	for_each_cpu(i, sched_group_cpus(group)) {
>>  		/* Bias balancing toward cpus of our domain */
>> -		if (local_group)
>> +		if (i == this_cpu)
> What is the motivation for changing the local_group load calculation?
> Now the load contributions of all cpus in the local group, except
> this_cpu, will contribute more as their contribution (this_load) is
> determined using target_load() instead.
>
> If I'm not mistaken, that will lead to more frequent load balancing as
> the local_group bias has been reduced. That is the opposite of your
> intentions based on your comment in target_load().

Good catch. I will reconsider this again. :)
>
>>  			load = source_load(i);
>>  		else
>> -			load = target_load(i);
>> +			load = target_load(i, sd->imbalance_pct);
> You scale by sd->imbalance_pct instead of 100+(sd->imbalance_pct-100)/2
> that you removed above. sd->imbalance_pct may have been arbitrarily
> chosen in the past, but changing it may affect behavior.
>


--
Thanks
Alex
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/