[patch] lockdep: core, fix rq-lock handling on __ARCH_WANT_UNLOCKED_CTXSW

From: Ingo Molnar
Date: Wed Jul 12 2006 - 16:30:30 EST


Subject: lockdep: core, fix rq-lock handling on __ARCH_WANT_UNLOCKED_CTXSW
From: Ingo Molnar <mingo@xxxxxxx>

On platforms that have __ARCH_WANT_UNLOCKED_CTXSW set and want to
implement lock validator support there is a bug in rq->lock handling: in
this case we don't 'carry over' the runqueue lock into the next task,
yet we still did a spin_release() of it. Fix this by making the
spin_release() in context_switch() dependent on
!__ARCH_WANT_UNLOCKED_CTXSW.
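
For reference, a simplified sketch of the two locking conventions around
switch_to() - this is not the exact kernel/sched.c code, the ->oncpu
handling and the __ARCH_WANT_INTERRUPTS_ON_CTXSW variants are left out:

#ifndef __ARCH_WANT_UNLOCKED_CTXSW
/* default: rq->lock is carried over and released by the *next* task */
static inline void prepare_lock_switch(struct rq *rq, struct task_struct *next)
{
	/* keep rq->lock held across switch_to() */
}
static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev)
{
	/* the incoming task re-acquires the lock for lockdep purposes
	 * (pairing with the early spin_release() in context_switch()) ... */
	spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
	/* ... and then releases it for real */
	spin_unlock_irq(&rq->lock);
}
#else
/* __ARCH_WANT_UNLOCKED_CTXSW (e.g. MIPS): rq->lock is dropped before
 * switch_to(), so nothing is carried over into the next task and the
 * early lockdep release in context_switch() must not be done */
static inline void prepare_lock_switch(struct rq *rq, struct task_struct *next)
{
	spin_unlock(&rq->lock);
}
static inline void finish_lock_switch(struct rq *rq, struct task_struct *prev)
{
	local_irq_enable();
}
#endif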

(Reported by Ralf Baechle on MIPS, which has __ARCH_WANT_UNLOCKED_CTXSW.
This fixes a lockdep-internal BUG message on such platforms.)

Signed-off-by: Ingo Molnar <mingo@xxxxxxx>
---
kernel/sched.c | 8 ++++++++
1 file changed, 8 insertions(+)

Index: linux/kernel/sched.c
===================================================================
--- linux.orig/kernel/sched.c
+++ linux/kernel/sched.c
@@ -1788,7 +1788,15 @@ context_switch(struct rq *rq, struct tas
 		WARN_ON(rq->prev_mm);
 		rq->prev_mm = oldmm;
 	}
+	/*
+	 * Since the runqueue lock will be released by the next
+	 * task (which is an invalid locking op but in the case
+	 * of the scheduler it's an obvious special-case), so we
+	 * do an early lockdep release here:
+	 */
+#ifndef __ARCH_WANT_UNLOCKED_CTXSW
 	spin_release(&rq->lock.dep_map, 1, _THIS_IP_);
+#endif
 
 	/* Here we just switch the register state and the stack. */
 	switch_to(prev, next, prev);