Re: [PATCH V2 0/3] Introduce Thermal Pressure

From: Thara Gopinath
Date: Wed Apr 17 2019 - 13:18:25 EST



On 04/17/2019 01:36 AM, Ingo Molnar wrote:
>
> * Thara Gopinath <thara.gopinath@xxxxxxxxxx> wrote:
>
>> The test results below show a 3-5% improvement in performance when
>> using the third solution, compared to the default system today where
>> the scheduler is unaware of CPU capacity limitations due to thermal
>> events.
>
> The numbers look very promising!

Hello Ingo,
Thank you for the review.
>
> I've rearranged the results to make the performance properties of the
> various approaches and parameters easier to see:
>
> (seconds, lower is better)
>
>                                          Hackbench  Aobench  Dhrystone
>                                          =========  =======  =========
> Vanilla kernel (No Thermal Pressure)         10.21   141.58       1.14
> Instantaneous thermal pressure               10.16   141.63       1.15
> Thermal Pressure Averaging:
>   - PELT fmwk                                 9.88   134.48       1.19
>   - non-PELT Algo. Decay : 500 ms             9.94   133.62       1.09
>   - non-PELT Algo. Decay : 250 ms             7.52   137.22       1.012
>   - non-PELT Algo. Decay : 125 ms             9.87   137.55       1.12
>
>
> Firstly, a couple of questions about the numbers:
>
> 1)
>
> Is the 1.012 result for "non-PELT 250 msecs Dhrystone" really 1.012?
> You reported it as:
>
> non-PELT Algo. Decay : 250 ms 1.012 7.02%

It is indeed 1.012. I ran the "non-PELT Algo 250 ms" benchmarks
multiple times because of the anomalies noticed. The extra significant
digit is a formatting error on my part from when I copy-pasted the
results into a Google sheet I am maintaining to capture the test
results. Sorry about the confusion.
>
> But the formatting is significant: 3 digits versus only two for all
> the other results.
>
> 2)
>
> You reported the hackbench numbers with "10 runs" - did the other
> benchmarks use 10 runs as well? Maybe you used fewer runs for the
> longest benchmark, Aobench?
Hackbench and Dhrystone are 10 runs each. Aobench is part of the
Phoronix Test Suite, which runs it six times and reports the per-run
results, mean, and stddev. On my part, I ran Aobench just once per
configuration.

>
> Secondly, it appears the non-PELT decaying average is the best approach,
> but the results are a bit coarse around the ~250 msecs peak. Maybe it
> would be good to measure it in 50 msecs steps between 50 msecs and 1000
> msecs - but only if it can be scripted sanely:

non-PELT looks better overall: the test results are quite comparable
(if not better) between the two solutions, and it addresses the
concern people raised when I posted V1 using the PELT-fmwk algorithm,
namely reusing the utilization signal to track thermal pressure.
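
For reference, a minimal sketch of the kind of non-PELT decaying
average being discussed here (all names and the exact decay shape are
illustrative only, not the code in the posted patches):

	/*
	 * Hypothetical sketch: accumulate thermal pressure as the gap
	 * between a CPU's original capacity and its thermally capped
	 * capacity, and decay the history geometrically once per
	 * decay period. Illustrative only; not the posted patches.
	 */
	struct thermal_pressure {
		unsigned long avg;	/* decayed average of lost capacity */
	};

	/* Called once per decay period (e.g. every 250 ms);
	 * cap_curr <= cap_orig is assumed. */
	static void thermal_pressure_update(struct thermal_pressure *tp,
					    unsigned long cap_orig,
					    unsigned long cap_curr)
	{
		/* Halve the old history, then add the current sample. */
		tp->avg = (tp->avg >> 1) + (cap_orig - cap_curr);
	}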

Regarding the decay period, I agree that more testing can be done. I
like your suggestions below and I am going to try implementing them
sometime next week. Once I have some solid results, I will send them out.

My concern with getting hung up too much on the decay period is that I
think it could vary from SoC to SoC, depending on the type and number
of cores and the thermal characteristics. So I was thinking the decay
period should eventually be configurable via a config option or some
other means (see the sketch below). Testing on different systems will
definitely help, and maybe I am wrong and there is not much variation
between systems.
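
A purely illustrative sketch of such a knob, along the lines of the
debug sysctl Ingo suggests further down (none of these names exist in
the posted patches; they are assumptions for the example):

	/*
	 * Hypothetical debug sysctls for benchmarking: tune the decay
	 * period and toggle instantaneous thermal pressure at runtime,
	 * without rebooting. Names are illustrative only.
	 */
	#include <linux/init.h>
	#include <linux/sysctl.h>

	static unsigned int sysctl_sched_thermal_decay_ms = 250;
	static unsigned int sysctl_sched_thermal_instant;	/* 0 = averaged */

	static struct ctl_table thermal_debug_table[] = {
		{
			.procname	= "sched_thermal_decay_ms",
			.data		= &sysctl_sched_thermal_decay_ms,
			.maxlen		= sizeof(unsigned int),
			.mode		= 0644,
			.proc_handler	= proc_douintvec,
		},
		{
			.procname	= "sched_thermal_instant",
			.data		= &sysctl_sched_thermal_instant,
			.maxlen		= sizeof(unsigned int),
			.mode		= 0644,
			.proc_handler	= proc_douintvec,
		},
		{ }
	};

	static int __init thermal_debug_sysctl_init(void)
	{
		register_sysctl("kernel", thermal_debug_table);
		return 0;
	}
	late_initcall(thermal_debug_sysctl_init);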

Regards
Thara

>
> A possible approach would be to add a debug sysctl for the tuning period,
> and script all these benchmark runs and the printing of the results. You
> could add another (debug) sysctl to turn the 'instant' logic on, and to
> restore vanilla kernel behavior as well - this makes it all much easier
> to script and measure with a single kernel image, without having to
> reboot the kernel. The sysctl overhead will not be measurable for
> workloads like this.
>
> Then you can use "perf stat --null --table" to measure runtime and stddev
> easily and with a single tool, for example:
>
> dagon:~> perf stat --null --sync --repeat 10 --table ./hackbench 20 >benchmark.out
>
> Performance counter stats for './hackbench 20' (10 runs):
>
> # Table of individual measurements:
> 0.15246 (-0.03960) ######
> 0.20832 (+0.01627) ##
> 0.17895 (-0.01310) ##
> 0.19791 (+0.00585) #
> 0.19209 (+0.00004) #
> 0.19406 (+0.00201) #
> 0.22484 (+0.03278) ###
> 0.18695 (-0.00511) #
> 0.19032 (-0.00174) #
> 0.19464 (+0.00259) #
>
> # Final result:
> 0.19205 +- 0.00592 seconds time elapsed ( +- 3.08% )
>
> Note how all the individual measurements can be captured this way,
> without seeing the benchmark output itself. So different benchmarks
> can be measured this way, assuming they don't have too long a setup
> time.
>
> Thanks,
>
> Ingo
>


--
Regards
Thara