Re: [PATCH 8/9] clocksource: Improve unstable clocksource detection

From: Thomas Gleixner
Date: Tue Sep 01 2015 - 13:14:20 EST


On Mon, 31 Aug 2015, Shaohua Li wrote:
> On Mon, Aug 31, 2015 at 11:47:52PM +0200, Thomas Gleixner wrote:
> > On Mon, 31 Aug 2015, Shaohua Li wrote:
> > > > The HPET wrap interval is 0xffffffff / 100000000 = 42.9s
> > > >
> > > > tsc interval is (0x481250b45b - 0x219e6efb50) / 2200000000 = 75s
> > > >
> > > > 32.1 + 42.9 = 75
> > > >
> > > > The example shows the HPET wrapping, while the TSC is marked unstable
> > >
> > > Thomas & John,
> > > Is this data enough to prove that the TSC-unstable issue can be triggered
> > > by an HPET wrap? I can resend the patch with the data included.
> >
> > Well, it's enough data to prove:
> >
> > - that keeping a VM off the CPU for 75 seconds is insane.
>
> It wraps in 42.9s. 42.9s isn't that long a time to block for. I don't think

You think that blocking softirq execution for 42.9 seconds is normal?
Seems we are living in a different universe.

> it's just the VM being kept off the CPU. A softirq hogging the CPU can cause it too.

I still want to see proof of that. So far there is just handwaving;
nobody has provided proper data to back it up.
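
To make the failure mode under discussion concrete, here is a rough
standalone model of the watchdog comparison. This is NOT the kernel
code; the 0.0625s threshold is only meant to mirror WATCHDOG_THRESHOLD
in kernel/time/clocksource.c, and the delay and frequencies are simply
the numbers quoted above.

#include <stdio.h>
#include <stdint.h>

/*
 * Rough standalone model of the watchdog comparison, NOT the kernel
 * code: the watchdog clocksource (the emulated 100MHz HPET) is only
 * 32 bits wide, so a delta taken across a delay longer than its wrap
 * interval aliases modulo 2^32.
 */
#define HPET_MASK	0xffffffffULL
#define HPET_HZ		100000000ULL		/* emulated HPET: 100MHz */
#define TSC_HZ		2200000000ULL		/* TSC: 2.2GHz */
#define THRESHOLD_S	0.0625			/* ~WATCHDOG_THRESHOLD, 62.5ms */

int main(void)
{
	double delay_s = 75.0;			/* watchdog softirq delayed 75s */

	/* true elapsed cycles on both clocksources */
	uint64_t tsc_delta  = (uint64_t)(delay_s * TSC_HZ);
	uint64_t hpet_ticks = (uint64_t)(delay_s * HPET_HZ);

	/* the 32-bit HPET counter wrapped once; the observed delta aliases */
	uint64_t hpet_delta = hpet_ticks & HPET_MASK;

	double tsc_s  = (double)tsc_delta  / TSC_HZ;
	double hpet_s = (double)hpet_delta / HPET_HZ;

	printf("tsc  interval: %.1f s\n", tsc_s);
	printf("hpet interval: %.1f s (aliased)\n", hpet_s);

	if (tsc_s - hpet_s > THRESHOLD_S || hpet_s - tsc_s > THRESHOLD_S)
		printf("-> watchdog would mark the TSC unstable\n");
	return 0;
}

With a 75 second delay the 32-bit HPET delta aliases to ~32.1s while
the TSC delta still reads 75s, so the comparison fires and the TSC is
marked unstable even though the TSC itself was fine.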

> > - that emulating the HPET at 100MHz shortens the HPET wraparound by
> > a factor of 7 compared to real hardware. With a realistic HPET
> > frequency you have about 300 seconds.
> >
> > Who thought that using a 100MHz HPET frequency is a brilliant idea?
>
> I'm not a VM expert. My guess is that 100MHz can reduce interrupts. It's
> insane for the hypervisor to update the HPET count at 14.3MHz. Switching
> to HPET can introduce even higher overhead in a VM, because every I/O
> memory access causes a vmexit.

Sorry, that does not make any sense at all.

- How does a 100MHz HPET frequency reduce interrupts?

- What's insane about a lower emulated HPET frequency?

- We all know that switching to HPET is more expensive than just
using TSC. That's not the question at all and completely
unrelated to the 100MHz HPET emulation frequency.

> > So we should add crappy heuristics to the watchdog just to work around
> > virt insanities? I'm not convinced.
>
> This is a real issue which can seriously impact performance. Though
> the data was collected in a VM, we do see the issue happen on physical
> machines too.

And what's the exact reason for this on physical machines? Some magic
softirq hog again for which you cannot provide proof?

> The watchdog clocksource apparently has a restriction here; it
> deserves an improvement if we can manage one.

The restriction in a sane environment is 300 seconds. And the only
fallout on physical hardware which we have seen so far is on
preempt-RT where the softirq can actually be blocked by RT hogs, but
that's a completely different issue and has nothing to do with the
situation in mainline.
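
For reference, the wrap arithmetic behind the 42.9 and 300 second
numbers is just the 32-bit counter width divided by the HPET frequency.
A minimal sketch, assuming the frequencies discussed above:

#include <stdio.h>

int main(void)
{
	double hpet_mask = 4294967295.0;	/* 32-bit HPET counter, 0xffffffff */

	/* real hardware HPET runs at roughly 14.318MHz */
	printf("wrap at 14.318MHz: %.0f s\n", hpet_mask / 14318180.0);

	/* the emulated HPET in the reported VM runs at 100MHz */
	printf("wrap at 100MHz:    %.1f s\n", hpet_mask / 100000000.0);
	return 0;
}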

> I'm happy to hear from you if there is a better solution, but we
> shouldn't pretend there is no issue here.

I'm not pretending anything. I'm merely refusing to accept that change
w/o a proper explanation of WHY the watchdog fails on physical hardware,
i.e. WHY it does not run for more than 300 seconds.

Thanks,

tglx