Re: [PATCH 1/2 v3] sched: fix find_idlest_group for fork

From: Matt Fleming
Date: Fri Dec 09 2016 - 08:18:10 EST


On Thu, 08 Dec, at 05:56:53PM, Vincent Guittot wrote:
> During fork, the utilization of a task is initialized only once the rq has
> been selected, because the current utilization level of the rq is used to
> set the utilization of the forked task. As the task's utilization is still
> null at this step of the fork sequence, it doesn't make sense to look for
> spare capacity that can fit the task's utilization.
> Furthermore, I can see performance regressions for the test
> "hackbench -P -g 1" because the least-loaded policy is always bypassed and
> tasks are not spread out during fork.
>
> With this patch and the fix below, we are back to the same performance as
> v4.8. The fix below is only a temporary one used for this test until a
> smarter solution is found, because we can't simply remove the check, which
> is useful for other benchmarks.
>
> @@ -5708,13 +5708,6 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
>
> avg_cost = this_sd->avg_scan_cost;
>
> - /*
> - * Due to large variance we need a large fuzz factor; hackbench in
> - * particularly is sensitive here.
> - */
> - if ((avg_idle / 512) < avg_cost)
> - return -1;
> -
> time = local_clock();
>
> for_each_cpu_wrap(cpu, sched_domain_span(sd), target, wrap) {
>
> Signed-off-by: Vincent Guittot <vincent.guittot@xxxxxxxxxx>
> Acked-by: Morten Rasmussen <morten.rasmussen@xxxxxxx>
> ---
> kernel/sched/fair.c | 6 ++++++
> 1 file changed, 6 insertions(+)
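
Since the 6-line patch body itself isn't quoted above, here is a minimal,
self-contained sketch of the idea behind it (illustrative only: pick_group(),
struct group_stats and the "forking" flag are hypothetical stand-ins, not the
actual fair.c code):

#include <stdio.h>
#include <stdbool.h>

struct group_stats {
	unsigned long load;		/* runnable load of the group  */
	unsigned long spare_cap;	/* spare capacity of the group */
};

/* Hypothetical helper: choose between two groups for a task. */
static const struct group_stats *
pick_group(const struct group_stats *a, const struct group_stats *b,
	   unsigned long task_util, bool forking)
{
	/*
	 * During fork task_util is still 0, so asking "does the spare
	 * capacity fit the task?" is always true and tells us nothing;
	 * fall back to the least-loaded group in that case.
	 */
	if (forking || task_util == 0)
		return a->load <= b->load ? a : b;

	/* Otherwise prefer the group with more spare capacity. */
	if (a->spare_cap >= task_util || b->spare_cap >= task_util)
		return a->spare_cap >= b->spare_cap ? a : b;

	return a->load <= b->load ? a : b;
}

int main(void)
{
	struct group_stats g0 = { .load = 100, .spare_cap = 512 };
	struct group_stats g1 = { .load = 300, .spare_cap = 900 };

	/* Fork: utilization is still 0, the least-loaded group (g0) wins. */
	printf("fork   -> load %lu\n", pick_group(&g0, &g1, 0, true)->load);

	/* Wakeup with real utilization: spare capacity decides (g1). */
	printf("wakeup -> load %lu\n", pick_group(&g0, &g1, 200, false)->load);
	return 0;
}

In other words, on the fork path the task's utilization is still zero, so a
spare-capacity comparison against it is meaningless and the least-loaded
group should be chosen instead.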

Tested-by: Matt Fleming <matt@xxxxxxxxxxxxxxxxxxx>
Reviewed-by: Matt Fleming <matt@xxxxxxxxxxxxxxxxxxx>