[RFC PATCH 1/3] lockdep/irq: Be more strict about IRQ-threadable code end

From: Frederic Weisbecker
Date: Sun Mar 22 2020 - 23:32:27 EST


struct task_struct::hardirq_threaded is set before the IRQ handler
is invoked but is never explicitly cleared afterwards. It only gets
cleared the next time trace_hardirq_enter() is called, which at best
happens after softirq execution, if any, or at worst on the next
hardirq entry.

Besides being confusing, this leaves all the hardirq code that follows
the handler outside lockdep's vigilance with respect to LD_WAIT_CONFIG
locking.
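
For reference, lockdep derives the wait type allowed in the current
context roughly as follows (a simplified sketch of the wait-type check
in kernel/locking/lockdep.c, names approximate, exact code may differ):

	/*
	 * Pick the wait type the current context may block under.
	 * A stale hardirq_threaded keeps reporting LD_WAIT_CONFIG here
	 * even after the threadable handler has returned.
	 */
	if (curr->hardirq_context) {
		if (curr->hardirq_threaded)
			curr_inner = LD_WAIT_CONFIG;	/* would run threaded */
		else
			curr_inner = LD_WAIT_SPIN;	/* true hardirq context */
	} else if (curr->softirq_context) {
		curr_inner = LD_WAIT_CONFIG;		/* softirqs can be threaded */
	} else {
		curr_inner = LD_WAIT_MAX;
	}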

Let's rather be paranoid and make sure we properly delimit the end of an
LD_WAIT_CONFIG-safe block.
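
With this change the threadable section is explicitly delimited around
the handler invocation, restating the handle.c hunk below as a sketch:

	if (threadable)
		trace_hardirq_threaded();	/* begin LD_WAIT_CONFIG-safe block */

	trace_irq_handler_entry(irq, action);
	res = action->handler(irq, action->dev_id);
	trace_irq_handler_exit(irq, action, res);

	if (threadable)
		trace_hardirq_unthreaded();	/* end LD_WAIT_CONFIG-safe block */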

Signed-off-by: Frederic Weisbecker <frederic@xxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
---
include/linux/irqflags.h | 22 +++++++++++++++-------
kernel/irq/handle.c | 10 ++++++++--
2 files changed, 23 insertions(+), 9 deletions(-)

diff --git a/include/linux/irqflags.h b/include/linux/irqflags.h
index a16adbb58f66..28481702460e 100644
--- a/include/linux/irqflags.h
+++ b/include/linux/irqflags.h
@@ -37,17 +37,24 @@
# define trace_softirqs_enabled(p) ((p)->softirqs_enabled)
# define trace_hardirq_enter() \
do { \
- if (!current->hardirq_context++) \
- current->hardirq_threaded = 0; \
-} while (0)
-# define trace_hardirq_threaded() \
-do { \
- current->hardirq_threaded = 1; \
+ current->hardirq_context++; \
} while (0)
+
# define trace_hardirq_exit() \
do { \
current->hardirq_context--; \
} while (0)
+
+# define trace_hardirq_threaded() \
+do { \
+ current->hardirq_threaded = 1; \
+} while (0)
+
+# define trace_hardirq_unthreaded() \
+do { \
+ current->hardirq_threaded = 0; \
+} while (0)
+
# define lockdep_softirq_enter() \
do { \
current->softirq_context++; \
@@ -98,8 +105,9 @@ do { \
# define trace_hardirqs_enabled(p) 0
# define trace_softirqs_enabled(p) 0
# define trace_hardirq_enter() do { } while (0)
-# define trace_hardirq_threaded() do { } while (0)
# define trace_hardirq_exit() do { } while (0)
+# define trace_hardirq_threaded() do { } while (0)
+# define trace_hardirq_unthreaded() do { } while (0)
# define lockdep_softirq_enter() do { } while (0)
# define lockdep_softirq_exit() do { } while (0)
# define lockdep_hrtimer_enter(__hrtimer) do { } while (0)
diff --git a/kernel/irq/handle.c b/kernel/irq/handle.c
index 16ee716e8291..39d6cf9f5853 100644
--- a/kernel/irq/handle.c
+++ b/kernel/irq/handle.c
@@ -144,18 +144,24 @@ irqreturn_t __handle_irq_event_percpu(struct irq_desc *desc, unsigned int *flags

for_each_action_of_desc(desc, action) {
irqreturn_t res;
+ bool threadable;

/*
* If this IRQ would be threaded under force_irqthreads, mark it so.
*/
- if (irq_settings_can_thread(desc) &&
- !(action->flags & (IRQF_NO_THREAD | IRQF_PERCPU | IRQF_ONESHOT)))
+ threadable = (irq_settings_can_thread(desc) &&
+ !(action->flags & (IRQF_NO_THREAD | IRQF_PERCPU | IRQF_ONESHOT)));
+
+ if (threadable)
trace_hardirq_threaded();

trace_irq_handler_entry(irq, action);
res = action->handler(irq, action->dev_id);
trace_irq_handler_exit(irq, action, res);

+ if (threadable)
+ trace_hardirq_unthreaded();
+
if (WARN_ONCE(!irqs_disabled(),"irq %u handler %pS enabled interrupts\n",
irq, action->handler))
local_irq_disable();
--
2.25.0