Re: balance storm

From: Libo Chen
Date: Tue May 27 2014 - 21:06:42 EST


On 2014/5/28 4:53, Thomas Gleixner wrote:
> On Tue, 27 May 2014, Libo Chen wrote:
>> On 2014/5/27 17:55, Mike Galbraith wrote:
>>> On Tue, 2014-05-27 at 15:56 +0800, Libo Chen wrote:
>>>>> On 2014/5/26 22:19, Mike Galbraith wrote:
>>>>>>> On Mon, 2014-05-26 at 20:16 +0800, Libo Chen wrote:
>>>>>>>>> On 2014/5/26 13:11, Mike Galbraith wrote:
>>>>>>>
>>>>>>>>>>> Your synthetic test is the absolute worst case scenario. There has to
>>>>>>>>>>> be work between wakeups for select_idle_sibling() to have any chance
>>>>>>>>>>> whatsoever of turning in a win. At 0 work, it becomes 100% overhead.
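(For reference, the worst case described above is a pure ping-pong: every
wakeup immediately goes back to sleep, so select_idle_sibling() never gets a
chance to pay for itself. A minimal sketch of that kind of load follows; it
is only an illustration of the scenario, not the actual test from this
thread.)

/*
 * Rough sketch of a zero-work wakeup ping-pong: two threads bounce a
 * byte over a pair of pipes with no work in between, so every wakeup
 * goes straight back to sleep.
 *
 * Build: gcc -O2 -pthread pingpong.c -o pingpong
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static int ping[2], pong[2];

static void *echoer(void *arg)
{
	char c;

	/* wake up, forward the byte, go right back to sleep */
	while (read(ping[0], &c, 1) == 1 && write(pong[1], &c, 1) == 1)
		;
	return NULL;
}

int main(void)
{
	pthread_t thr;
	char c = 'x';
	long i;

	if (pipe(ping) || pipe(pong)) {
		perror("pipe");
		return 1;
	}
	pthread_create(&thr, NULL, echoer, NULL);

	for (i = 0; i < 1000000; i++) {
		if (write(ping[1], &c, 1) != 1 || read(pong[0], &c, 1) != 1)
			break;
	}
	puts("done");
	return 0;
}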
>>>>>>>>>
>>>>>>>>> It is not synthetic, it is a real problem in our product. Under no
>>>>>>>>> load it wastes a lot of CPU time.
>>>>>>>
>>>>>>> What happens in your product if you apply the commit I pointed out?
>>>>>
>>>>> Under no load, CPU usage is up to 60%, but the same apps cost 10% on
>>>>> susp sp1. The apps use a lot of timers.
>>> Something is rotten. 3.14-rt contains that commit; I ran your test with
>>> 256 threads on a 64 core box and saw ~4%.
>>>
>>> Putting a master/nopreempt config on the box and doing the same test, the
>>> box is chewing up truckloads of CPU, but not from migrations.
>>>
>>> perf top -g --sort=symbol
>> On my box:
>>
>> perf top -g --sort=symbol
>>
>> Events: 3K cycles
>> 73.27% [k] read_hpet
>
> Why is that machine using read_hpet()?
>
> Please provide the output of
>
> # dmesg | grep -i tsc
>

Euler:/home # dmesg | grep -i tsc
[ 0.000000] Fast TSC calibration using PIT
[ 0.226921] TSC synchronization [CPU#0 -> CPU#1]:
[ 0.227142] Measured 1053728 cycles TSC warp between CPUs, turning off TSC clock.
[ 0.008000] Marking TSC unstable due to check_tsc_sync_source failed
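
So the boot-time sync check measured a ~1M cycle warp and disabled the TSC
clocksource. For a crude user-space sanity check of the cross-CPU TSC offset,
something like the sketch below can be used; the delta it prints also
contains the cost of migrating between CPUs, so only a gross offset (or the
TSC apparently going backwards) means anything. The kernel's
check_tsc_sync_source() does a much tighter lock-stepped comparison.

/*
 * Crude user-space check of the TSC offset between two CPUs.
 *
 * Build: gcc -O2 tscwarp.c -o tscwarp
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>

static uint64_t tsc_on_cpu(int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	/* pin the thread; by the time this returns we run on 'cpu' */
	if (sched_setaffinity(0, sizeof(set), &set))
		perror("sched_setaffinity");
	return __rdtsc();
}

int main(void)
{
	uint64_t t0 = tsc_on_cpu(0);
	uint64_t t1 = tsc_on_cpu(1);

	printf("CPU0: %llu  CPU1: %llu  delta: %lld cycles\n",
	       (unsigned long long)t0, (unsigned long long)t1,
	       (long long)(t1 - t0));
	return 0;
}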

> and
>
> # cat /sys/devices/system/clocksource/clocksource0/available_clocksource

hpet acpi_pm

>
> and
>
> # cat /sys/devices/system/clocksource/clocksource0/current_clocksource

hpet
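
With hpet selected, every clocksource read has to touch the HPET MMIO
registers instead of the TSC, which is presumably why read_hpet dominates the
profile for timer-heavy apps. A quick sketch to measure the per-call cost of
the current clocksource (run it with hpet vs. tsc selected on a box where the
TSC is usable to see the difference):

/*
 * Quick sketch: time a burst of clock_gettime() calls to see how
 * expensive reads of the current clocksource are.  With tsc the call
 * usually stays in the vDSO; with hpet it is far slower.
 *
 * Build: gcc -O2 clkbench.c -o clkbench   (add -lrt on older glibc)
 */
#include <stdio.h>
#include <time.h>

#define LOOPS 1000000L

int main(void)
{
	struct timespec start, end, dummy;
	long i;
	double ns;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < LOOPS; i++)
		clock_gettime(CLOCK_MONOTONIC, &dummy);
	clock_gettime(CLOCK_MONOTONIC, &end);

	ns = (end.tv_sec - start.tv_sec) * 1e9 +
	     (end.tv_nsec - start.tv_nsec);
	printf("%.1f ns per clock_gettime()\n", ns / LOOPS);
	return 0;
}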

>
> Thanks,
>
> tglx


--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/