Re: [RFC] workqueue: Fix irq inversion deadlock in manage_workers()

From: Peter Zijlstra
Date: Mon Oct 09 2017 - 02:43:08 EST


On Sun, Oct 08, 2017 at 12:03:47PM -0700, Tejun Heo wrote:
> So, if I'm not mistaken, this is a regression caused by b9c16a0e1f73
> ("locking/mutex: Fix lockdep_assert_held() fail") which seems to
> replace irqsave operations inside mutex with unconditional irq ones.

No, it existed before that. You're looking at the DEBUG_MUTEX case; the
normal case looked like:

diff --git a/kernel/locking/mutex.h b/kernel/locking/mutex.h
index 4410a4af42a3..6ebc1902f779 100644
--- a/kernel/locking/mutex.h
+++ b/kernel/locking/mutex.h
@@ -9,10 +9,6 @@
  * !CONFIG_DEBUG_MUTEXES case. Most of them are NOPs:
  */
 
-#define spin_lock_mutex(lock, flags) \
-		do { spin_lock(lock); (void)(flags); } while (0)
-#define spin_unlock_mutex(lock, flags) \
-		do { spin_unlock(lock); (void)(flags); } while (0)
 #define mutex_remove_waiter(lock, waiter, task) \
 	__list_del((waiter)->list.prev, (waiter)->list.next)


Which is exactly what lives today.
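
For reference, the only variant that ever touched the irq state was the
CONFIG_DEBUG_MUTEXES one in kernel/locking/mutex-debug.h. A minimal sketch
of the two pre-b9c16a0e1f73 flavours side by side (the debug-side body is
paraphrased from memory, not verbatim):

/*
 * Sketch only, not verbatim kernel source: the two old spin_lock_mutex()
 * flavours, to show which side actually disabled interrupts.
 */
#ifdef CONFIG_DEBUG_MUTEXES
/* mutex-debug.h (paraphrased): save/restore irq state around a raw lock */
# define spin_lock_mutex(lock, flags)				\
	do {							\
		local_irq_save(flags);				\
		arch_spin_lock(&(lock)->rlock.raw_lock);	\
	} while (0)
# define spin_unlock_mutex(lock, flags)				\
	do {							\
		arch_spin_unlock(&(lock)->rlock.raw_lock);	\
		local_irq_restore(flags);			\
	} while (0)
#else
/* mutex.h (the lines removed above): plain spin lock, flags unused */
# define spin_lock_mutex(lock, flags) \
		do { spin_lock(lock); (void)(flags); } while (0)
# define spin_unlock_mutex(lock, flags) \
		do { spin_unlock(lock); (void)(flags); } while (0)
#endif

So in the !DEBUG case wait_lock was taken with a plain spin_lock() and the
flags argument was ignored, both before and after that commit; only the
debug build ever did the irqsave dance.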