Re: [PATCH] perf, x86: catch spurious interrupts after disabling counters

From: Don Zickus
Date: Thu Sep 23 2010 - 20:03:04 EST


On Wed, Sep 15, 2010 at 06:20:34PM +0200, Robert Richter wrote:
> On 14.09.10 19:41:32, Robert Richter wrote:
> > I found the reason why we get the unknown nmi. For some reason
> > cpuc->active_mask in x86_pmu_handle_irq() is zero. Thus, no counters
> > are handled when we get an nmi. It seems there is a race somewhere in
> > accessing the active_mask. So far I don't have a fix available.
> > Changing x86_pmu_stop() did not help:
>
> The patch below for tip/perf/urgent fixes this.
>
> -Robert

I was able to duplicate the problem and can confirm this patch fixes the
issue for me. I tried poking around (similar to things Robert probably
did) and had no luck. Something just doesn't make sense, but I guess for
now this patch is good enough for me. :-)
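
For anyone else chasing this, the way I picture the sequence Robert
describes: the handler only counts counters whose bit is set in
active_mask, so once x86_pmu_stop() has cleared the bit, a late
interrupt finds nothing to do, handled stays 0, and the NMI code flags
it as unknown. Here is a toy userspace model of that (my own
simplification, not the kernel code):

/*
 * Toy userspace model of the failure described above -- NOT kernel
 * code, just an illustration of the sequence as I understand it.
 */
#include <stdio.h>

#define NUM_COUNTERS 4

static unsigned long active_mask;	/* bit n set => counter n is active */

static int handle_nmi(void)
{
	int idx, handled = 0;

	for (idx = 0; idx < NUM_COUNTERS; idx++) {
		if (!(active_mask & (1UL << idx)))
			continue;	/* nothing to do for this counter */
		/* real code would check for overflow and service the event */
		handled++;
	}

	return handled;
}

int main(void)
{
	/* x86_pmu_start(): counter 0 goes active */
	active_mask |= 1UL << 0;
	/* x86_pmu_stop(): counter 0 is stopped again */
	active_mask &= ~(1UL << 0);

	/* a spurious interrupt for counter 0, still in flight, hits now */
	if (handle_nmi() == 0)
		printf("nothing handled -> the NMI would be reported as unknown\n");

	return 0;
}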

Cheers,
Don

>
> From 4206a086f5b37efc1b4d94f1d90b55802b299ca0 Mon Sep 17 00:00:00 2001
> From: Robert Richter <robert.richter@xxxxxxx>
> Date: Wed, 15 Sep 2010 16:12:59 +0200
> Subject: [PATCH] perf, x86: catch spurious interrupts after disabling counters
>
> Some cpus still deliver spurious interrupts after disabling a counter.
> This caused 'undelivered NMI' messages. This patch catches such
> interrupts so they are no longer reported as unknown NMIs.
>
> Signed-off-by: Robert Richter <robert.richter@xxxxxxx>
> ---
> arch/x86/kernel/cpu/perf_event.c | 13 ++++++++++++-
> 1 files changed, 12 insertions(+), 1 deletions(-)
>
> diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c
> index 3efdf28..df7aabd 100644
> --- a/arch/x86/kernel/cpu/perf_event.c
> +++ b/arch/x86/kernel/cpu/perf_event.c
> @@ -102,6 +102,7 @@ struct cpu_hw_events {
> */
> struct perf_event *events[X86_PMC_IDX_MAX]; /* in counter order */
> unsigned long active_mask[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
> + unsigned long running[BITS_TO_LONGS(X86_PMC_IDX_MAX)];
> int enabled;
>
> int n_events;
> @@ -1010,6 +1011,7 @@ static int x86_pmu_start(struct perf_event *event)
> x86_perf_event_set_period(event);
> cpuc->events[idx] = event;
> __set_bit(idx, cpuc->active_mask);
> + __set_bit(idx, cpuc->running);
> x86_pmu.enable(event);
> perf_event_update_userpage(event);
>
> @@ -1141,8 +1143,17 @@ static int x86_pmu_handle_irq(struct pt_regs *regs)
> cpuc = &__get_cpu_var(cpu_hw_events);
>
> for (idx = 0; idx < x86_pmu.num_counters; idx++) {
> - if (!test_bit(idx, cpuc->active_mask))
> + if (!test_bit(idx, cpuc->active_mask)) {
> + if (__test_and_clear_bit(idx, cpuc->running))
> + /*
> + * Though we deactivated the counter
> + * some cpus might still deliver
> + * spurious interrupts. Catching them
> + * here.
> + */
> + handled++;
> continue;
> + }
>
> event = cpuc->events[idx];
> hwc = &event->hw;
> --
> 1.7.2.2
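
The way I read the last hunk: x86_pmu_start() now also sets the
counter's bit in ->running, and when an NMI comes in for a counter that
is no longer in ->active_mask, __test_and_clear_bit() on ->running lets
the handler claim exactly one late interrupt instead of letting it
escalate to an unknown NMI. Extending the toy model from above (again
my own sketch, not the kernel code):

/*
 * Same toy model as before, extended with the 'running' bitmap from
 * the patch -- again NOT kernel code, just an illustration.
 */
#include <stdio.h>

#define NUM_COUNTERS 4

static unsigned long active_mask;
static unsigned long running;	/* set in start(), not cleared in stop() */

static int handle_nmi(void)
{
	int idx, handled = 0;

	for (idx = 0; idx < NUM_COUNTERS; idx++) {
		if (!(active_mask & (1UL << idx))) {
			if (running & (1UL << idx)) {
				/* like __test_and_clear_bit(): eat one late interrupt */
				running &= ~(1UL << idx);
				handled++;
			}
			continue;
		}
		handled++;
	}

	return handled;
}

int main(void)
{
	active_mask |= 1UL << 0;	/* x86_pmu_start() */
	running     |= 1UL << 0;	/* ... also remembers the counter ran */
	active_mask &= ~(1UL << 0);	/* x86_pmu_stop() clears only active_mask */

	/* first late NMI: handled=1, the spurious interrupt is swallowed */
	printf("late NMI:  handled=%d\n", handle_nmi());
	/* second one: handled=0, only a single interrupt per counter is eaten */
	printf("later NMI: handled=%d\n", handle_nmi());

	return 0;
}

So only a single spurious interrupt per stopped counter gets absorbed;
anything beyond that (or a genuinely unrelated NMI) would still be
reported.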
>
> --
> Advanced Micro Devices, Inc.
> Operating System Research Center
>