Re: [PATCH v2] x86: Reduce clock calibration time during slave cpu startup

From: Ingo Molnar
Date: Tue Aug 09 2011 - 11:07:27 EST



* Matthew Garrett <mjg@xxxxxxxxxx> wrote:

> On Fri, Aug 05, 2011 at 11:38:36PM +0200, Ingo Molnar wrote:
>
> > Well, it still uses heuristics: it assumes frequency is the same
> > when the cpuid data tells us that two CPUs are on the same
> > socket, right?
>
> If we only assume that when we have a constant TSC then it's a
> pretty safe assumption - the delay loop will be calibrated against
> the TSC, and the TSC will be constant across the package regardless
> of what frequency the cores are actually running at.

The delay loop might be calibrated against the TSC, but the amount of
real delay we get when we loop 100,000 times is still frequency
dependent: the loop body runs at core clock speed, not at TSC rate.
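To illustrate the point: a minimal sketch (not the kernel's actual
__delay()/__const_udelay() implementation) of a loops-per-jiffy style
busy-wait. The iteration count is fixed at calibration time, so the
wall-clock time the loop takes scales inversely with the core's
current clock frequency:

```c
/* Hypothetical sketch of an lpj-based delay loop.  The loop executes
 * a fixed number of iterations; how long that takes in real time
 * depends on how fast the core is currently clocked. */
static void delay_loops(unsigned long loops)
{
	volatile unsigned long i;

	for (i = 0; i < loops; i++)
		;	/* one iteration per calibrated "loop" */
}

/* Convert a requested delay to loop iterations using an lpj value
 * (loops per jiffy) measured once at calibration time:
 *   loops = usecs * lpj * HZ / 1000000
 * If the core later runs slower than it did during calibration, the
 * same iteration count takes longer in real time. */
static unsigned long usec_to_loops(unsigned long usecs,
				   unsigned long lpj, unsigned long hz)
{
	return (unsigned long)(((unsigned long long)usecs * lpj * hz)
			       / 1000000ULL);
}
```

For example, with lpj = 5,000,000 and HZ = 100, a 10 usec delay maps
to 5,000 iterations regardless of what frequency the core happens to
be running at when the delay is issued.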

What we probably want is the most conservative udelay calibration:
an lpj value measured at the highest possible frequency - that way a
delay can never come out shorter than requested, so hardware
components can never be overclocked by a driver.
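The arithmetic behind "conservative" can be sketched as follows
(hypothetical helper, not kernel code): if lpj is measured while the
core runs at f_calib, and the core later runs at some f_now <= f_calib,
the real delay is stretched by the factor f_calib / f_now >= 1, i.e. it
is never shorter than requested:

```c
/* Sketch: real delay produced by a loop count that was calibrated at
 * f_calib_hz, when the core is currently clocked at f_now_hz.
 * For f_now_hz <= f_calib_hz the result is >= requested_us. */
static double actual_delay_us(double requested_us,
			      double f_calib_hz, double f_now_hz)
{
	return requested_us * f_calib_hz / f_now_hz;
}
```

E.g. an lpj calibrated at 2 GHz yields a 20 usec real delay for a
10 usec request on a core throttled to 1 GHz - too long, but safe.
Calibrating at a low frequency would err in the dangerous direction
instead.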

Or does udelay() scale with the current frequency of the CPU?

Thanks,

Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/