Re: [PATCH V2 0/3] Introduce Thermal Pressure

From: Ingo Molnar
Date: Wed Apr 17 2019 - 01:36:34 EST

* Thara Gopinath <thara.gopinath@xxxxxxxxxx> wrote:

> The test results below show a 3-5% improvement in performance when
> using the third solution, compared to the default system today, where
> the scheduler is unaware of cpu capacity limitations due to thermal
> events.

The numbers look very promising!

I've rearranged the results to make the performance properties of the
various approaches and parameters easier to see:

(seconds, lower is better)

                                       Hackbench   Aobench   Dhrystone
                                       =========   =======   =========
  Vanilla kernel (No Thermal Pressure)     10.21    141.58       1.14
  Instantaneous thermal pressure           10.16    141.63       1.15
  Thermal Pressure Averaging:
    - PELT fmwk                             9.88    134.48       1.19
    - non-PELT Algo. Decay : 500 ms         9.94    133.62       1.09
    - non-PELT Algo. Decay : 250 ms         7.52    137.22       1.012
    - non-PELT Algo. Decay : 125 ms         9.87    137.55       1.12
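
As I understand the series, the "non-PELT Algo" rows use a simple
decaying average of the instantaneous thermal pressure, with the decay
period as the tunable. A minimal sketch of that kind of update - the
names and the exact weighting here are my assumptions, not the code in
the patches:

	/*
	 * Fixed-period decaying average of thermal pressure: each
	 * update moves the average 1/2^decay_shift of the way towards
	 * the latest sample. A larger shift (or a longer sampling
	 * period) gives a slower decay.
	 */
	static unsigned long thermal_pressure_avg;

	static void thermal_pressure_update(unsigned long sample,
					    unsigned int decay_shift)
	{
		long delta = (long)sample - (long)thermal_pressure_avg;

		thermal_pressure_avg += delta >> decay_shift;
	}

The 500/250/125 ms decay settings above would then control how quickly
old samples stop contributing to the average.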


Firstly, a couple of questions about the numbers:

1)

Is the 1.012 result for "non-PELT 250 msecs Dhrystone" really 1.012?
You reported it as:

   non-PELT Algo. Decay : 250 ms        1.012        7.02%

But that result is reported with three decimal digits, versus only two
for all the other results.

2)

You reported the hackbench numbers with "10 runs" - did the other
benchmarks use 10 runs as well? Maybe you used fewer runs for the
longest benchmark, Aobench?

Secondly, it appears the non-PELT decaying average is the best approach,
but the results are a bit coarse around the ~250 msecs peak. Maybe it
would be good to measure it in 50 msecs steps between 50 msecs and 1000
msecs - but only if it can be scripted sanely:

A possible approach would be to add a debug sysctl for the tuning period,
and script all these benchmark runs and the printing of the results. You
could add another (debug) sysctl to turn the 'instant' logic on, and to
restore vanilla kernel behavior as well - this makes it all much easier
to script and measure with a single kernel image, without having to
reboot the kernel. The sysctl overhead will not be measurable for
workloads like this.
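
For completeness, the kernel side of that could be as simple as the
sketch below - untested, and all the names (sched_thermal_mode,
sched_thermal_decay_ms) are made up for illustration:

	#include <linux/init.h>
	#include <linux/sysctl.h>

	/* 0: vanilla, 1: instantaneous, 2: decaying average */
	static int sysctl_sched_thermal_mode = 2;
	/* decay period of the averaging algorithm, in msecs */
	static int sysctl_sched_thermal_decay_ms = 500;

	static int mode_min, mode_max = 2;
	static int decay_min = 50, decay_max = 1000;

	static struct ctl_table sched_thermal_table[] = {
		{
			.procname	= "sched_thermal_mode",
			.data		= &sysctl_sched_thermal_mode,
			.maxlen		= sizeof(int),
			.mode		= 0644,
			.proc_handler	= proc_dointvec_minmax,
			.extra1		= &mode_min,
			.extra2		= &mode_max,
		},
		{
			.procname	= "sched_thermal_decay_ms",
			.data		= &sysctl_sched_thermal_decay_ms,
			.maxlen		= sizeof(int),
			.mode		= 0644,
			.proc_handler	= proc_dointvec_minmax,
			.extra1		= &decay_min,
			.extra2		= &decay_max,
		},
		{ }
	};

	static int __init sched_thermal_sysctl_init(void)
	{
		register_sysctl("kernel", sched_thermal_table);
		return 0;
	}
	late_initcall(sched_thermal_sysctl_init);

Then the whole sweep becomes a loop that echoes each decay period into
/proc/sys/kernel/sched_thermal_decay_ms and runs the benchmark under
perf stat, all with a single kernel image.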

Then you can use "perf stat --null --table" to measure runtime and stddev
easily and with a single tool, for example:

  dagon:~> perf stat --null --sync --repeat 10 --table ./hackbench 20 >benchmark.out

   Performance counter stats for './hackbench 20' (10 runs):

             # Table of individual measurements:
             0.15246 (-0.03960) ######
             0.20832 (+0.01627) ##
             0.17895 (-0.01310) ##
             0.19791 (+0.00585) #
             0.19209 (+0.00004) #
             0.19406 (+0.00201) #
             0.22484 (+0.03278) ###
             0.18695 (-0.00511) #
             0.19032 (-0.00174) #
             0.19464 (+0.00259) #

             # Final result:
             0.19205 +- 0.00592 seconds time elapsed  ( +-  3.08% )

Note how all the individual measurements are captured this way,
without seeing the benchmark output itself. So different benchmarks can
be measured the same way, assuming they don't have too long a setup
time.

Thanks,

Ingo