Re: [PATCH v2] x86: Reduce clock calibration time during slave cpustartup

From: Matthew Garrett
Date: Tue Aug 09 2011 - 11:18:22 EST


On Tue, Aug 09, 2011 at 05:06:24PM +0200, Ingo Molnar wrote:

> The delay loop might be calibrated against the TSC, but the amount of
> real delay we get when we loop 100,000 times will be frequency
> dependent.

We don't have a situation where a system boots with one core in a
package fixed at one frequency and another core in the same package at
another. It's possible that they'll float independently due to cpufreq
changes (although that's not possible with most current hardware), but
we need to take that into account anyway.

> What we probably want is the most conservative udelay calibration:
> have a lpj value measured on the highest possible frequency - this
> way hardware components can never be overclocked by a driver.

There's no way to force the highest possible frequency. Calibration
occurs before cpuidle is running, and the only way to get the maximum
frequency on a given core is to have all the other cores in C6.

> Or does udelay() scale with the current frequency of the CPU?

If it doesn't then it's been broken for the past 8 years or so.
--
Matthew Garrett | mjg59@xxxxxxxxxxxxx
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/