Re: [RFC PATCH v2 3/4] irq: Introduce IRQ_HANDLED_MANY

From: Thomas Gleixner
Date: Mon Feb 19 2024 - 06:03:17 EST


On Mon, Feb 19 2024 at 10:59, Thomas Gleixner wrote:
> On Fri, Feb 16 2024 at 04:59, Leonardo Bras wrote:
>
>> In threaded IRQs, some irq handlers are able to handle many requests in a
>> single run, but this is only accounted for as a single IRQ_HANDLED when
>> increasing threads_handled.
>>
>> In order to fix this, introduce IRQ_HANDLED_MANY, so the return value of
>> those IRQ handlers is able to signal how many IRQs got handled in that
>> run.
>>
>> In scenarios where there is no need to keep track of the number of IRQs
>> handled, convert it back to IRQ_HANDLED.
>
> That's not really workable as you'd have to update tons of drivers just
> to deal with that corner case. That's error prone and just extra
> complexity all over the place.
>
> This really needs to be solved in the core code.
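
For reference, this is roughly what every affected driver would have to
grow under the proposed scheme (sketch only; the IRQ_HANDLED_MANY(n) form
and the foo_*() helpers are made up for illustration and are not taken
from the patch):

	static irqreturn_t foo_irq_thread_fn(int irq, void *dev_id)
	{
		struct foo_device *foo = dev_id;
		unsigned int handled = 0;

		/* Drain all requests which piled up since the thread was woken */
		while (foo_next_request(foo)) {
			foo_process_request(foo);
			handled++;
		}

		if (!handled)
			return IRQ_NONE;

		/* Tell the core how many interrupts this run accounted for */
		return IRQ_HANDLED_MANY(handled);
	}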

Something like the uncompiled patch below should do the trick.

Thanks,

tglx
---
--- a/include/linux/irqdesc.h
+++ b/include/linux/irqdesc.h
@@ -38,7 +38,8 @@ struct pt_regs;
* @affinity_notify: context for notification of affinity changes
* @pending_mask: pending rebalanced interrupts
* @threads_oneshot: bitfield to handle shared oneshot threads
- * @threads_active: number of irqaction threads currently running
+ * @threads_active: number of irqaction threads currently activated
+ * @threads_running: number of irqaction threads currently running
* @wait_for_threads: wait queue for sync_irq to wait for threaded handlers
* @nr_actions: number of installed actions on this descriptor
* @no_suspend_depth: number of irqactions on a irq descriptor with
@@ -80,6 +81,7 @@ struct irq_desc {
#endif
unsigned long threads_oneshot;
atomic_t threads_active;
+ atomic_t threads_running;
wait_queue_head_t wait_for_threads;
#ifdef CONFIG_PM_SLEEP
unsigned int nr_actions;
--- a/kernel/irq/manage.c
+++ b/kernel/irq/manage.c
@@ -1194,9 +1194,11 @@ irq_forced_thread_fn(struct irq_desc *de
local_bh_disable();
if (!IS_ENABLED(CONFIG_PREEMPT_RT))
local_irq_disable();
+ atomic_inc(&desc->threads_running);
ret = action->thread_fn(action->irq, action->dev_id);
if (ret == IRQ_HANDLED)
atomic_inc(&desc->threads_handled);
+ atomic_dec(&desc->threads_running);

irq_finalize_oneshot(desc, action);
if (!IS_ENABLED(CONFIG_PREEMPT_RT))
--- a/kernel/irq/spurious.c
+++ b/kernel/irq/spurious.c
@@ -350,6 +350,12 @@ void note_interrupt(struct irq_desc *des
desc->threads_handled_last = handled;
} else {
/*
+ * Avoid false positives when there is
+ * actually a thread running.
+ */
+ if (atomic_read(&desc->threads_running))
+ return;
+ /*
* None of the threaded handlers felt
* responsible for the last interrupt
*