Re: [PATCH] sched: Introduce scaled capacity awareness in enqueue

From: Rohit Jain
Date: Mon Jun 05 2017 - 14:08:20 EST


On 06/02/2017 11:26 AM, Peter Zijlstra wrote:
> On Fri, Jun 02, 2017 at 11:20:20AM -0700, Rohit Jain wrote:
>> On 06/01/2017 05:37 AM, Peter Zijlstra wrote:
>>> On Thu, Jun 01, 2017 at 02:28:27PM +0200, Peter Zijlstra wrote:
>>>> On Wed, May 31, 2017 at 03:19:46PM -0700, Rohit Jain wrote:
>>>>
>>>>> 2) This scaled capacity is normalized and mapped into buckets.
>>>> Why?
>>> And its not at all clear why you'd need
>>> that to begin with.
>> Here is the problem I am trying to solve:
>>
>> The benchmark(s) have a high degree of variance when run multiple
>> times.
>>
>> We believe it is because of the scheduler not being aware of the scaled
>> down capacity of the CPUs because of IRQ/RT activity.
>>
>> This patch helps in solving the above problem. Do you have any thoughts
>> on solving this problem in any other way?
> Why does determining if a CPU's capacity is scaled down need to involve
> global data? AFAICT its a purely CPU local affair.

The global array is used to determine the threshold capacity: any CPU
whose scaled capacity lies below this threshold is treated as 'running
low' on available capacity. The threshold could also be defined
statically as a fixed fraction of full capacity, but calculating it
dynamically works across all the benchmarks.

Did you mean we should instead use a static cutoff to decide whether a
CPU should be treated as low on capacity and skipped during the idle
CPU search?