Re: [patch v5 14/15] sched: power aware load balance

From: Preeti U Murthy
Date: Sat Mar 30 2013 - 07:27:59 EST


On 03/29/2013 07:09 PM, Alex Shi wrote:
> On 03/29/2013 08:42 PM, Preeti U Murthy wrote:
>>>> did you try the simplest benchmark: while true; do :; done
>> Yeah I tried out this while true; do :; done benchmark on a vm which ran
>
> Thanks a lot for trying!
>
> What do you mean by 'vm'? Virtual machine?

Yes.

>
>> on an emulated 2 socket topology, with 2 cores per socket and 2 threads
>> per core. I ran two instances of this loop with the balance policy on,
>> and found one instance running on each socket, rather than both
>> instances getting consolidated onto one socket.
>>
>> But when I apply the change where we do not consider rq->util if the rq
>> has no tasks on it (nr_running == 0), the two instances of the above
>> benchmark get consolidated onto one socket.
>>
>>
>
> I don't know much about virtual machines; my guess is that an unstable
> VCPU-to-CPU mapping keeps rq->util large? Did you try pinning the VCPUs
> to physical CPUs?

No, I hadn't done any VCPU-to-CPU pinning. But then why did the situation
change so drastically, with the load getting consolidated, once the
rq->util of runqueues with 0 tasks on them was no longer counted in
sgs->utils?
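
To make the change I tried concrete, here is a standalone toy model of
the check (this is not the scheduler code itself; only rq->util and
nr_running are names taken from this thread, everything else is made up
for illustration):

/*
 * Toy model: sum a group's utilization with and without counting
 * runqueues that currently have no tasks.
 */
#include <stdio.h>

struct toy_rq {
	unsigned int nr_running;
	unsigned int util;		/* tracked utilization, 0..100 */
};

/* Sum the group's utilization, optionally skipping idle runqueues. */
static unsigned int group_util(const struct toy_rq *rqs, int nr, int skip_idle)
{
	unsigned int sum = 0;
	int i;

	for (i = 0; i < nr; i++) {
		if (skip_idle && rqs[i].nr_running == 0)
			continue;	/* ignore stale util of idle cpus */
		sum += rqs[i].util;
	}
	return sum;
}

int main(void)
{
	/* One busy cpu plus three idle cpus whose util has not decayed. */
	struct toy_rq socket[4] = {
		{ .nr_running = 1, .util = 90 },
		{ .nr_running = 0, .util = 40 },
		{ .nr_running = 0, .util = 35 },
		{ .nr_running = 0, .util = 20 },
	};

	printf("counting idle rqs: group util = %u\n", group_util(socket, 4, 0));
	printf("skipping idle rqs: group util = %u\n", group_util(socket, 4, 1));
	return 0;
}

With the idle runqueues counted, the group sum stays high and the socket
keeps looking busy; with them skipped, only the genuinely running task
contributes, which matches the consolidation I observed.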

>
> I still give rq->util weight even when nr_running is 0, because some
> transitory tasks may have been active on the cpu but were missed at the
> balancing point.
>
> I am just wondering whether forgetting rq->util when nr_running = 0 is
> the real root cause, if your finding is only on a VM without fixed
> VCPU-to-CPU pinning.

I see the same situation on a physical machine too, a 2 socket, 4 core
machine. In fact, using trace_printks in the load balancing path, I could
see that the load was not getting consolidated onto one socket because
the rq->util of a run-queue with no processes on it had not decayed to 0,
which is why the socket was considered overloaded and power aware
balancing was ruled out. All of this was on a physical machine.
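
For what it's worth, the slow decay alone explains the window I was
seeing: if the tracked signal roughly halves every 32ms of idleness (the
per-entity load tracking half-life, as I understand it), a runqueue that
has just gone idle keeps a visible rq->util across several balance
intervals. A toy calculation, not the kernel's fixed-point arithmetic:

#include <stdio.h>

int main(void)
{
	double util = 90.0;	/* utilization when the last task left */
	int ms;

	/* Assume the signal roughly halves every 32ms of idleness. */
	for (ms = 0; ms <= 256; ms += 32) {
		printf("%3d ms idle: util ~ %5.1f\n", ms, util);
		util *= 0.5;
	}
	return 0;
}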

Regards
Preeti U Murthy


