[027/136] xen: use stronger barrier after unlocking lock

From: Greg KH
Date: Thu Oct 01 2009 - 21:57:41 EST


2.6.31-stable review patch. If anyone has any objections, please let us know.

------------------
From: Yang Xiaowei <xiaowei.yang@xxxxxxxxx>

commit 2496afbf1e50c70f80992656bcb730c8583ddac3 upstream.

We need a stronger barrier between releasing the lock and checking
for any waiting spinners. A compiler barrier is not sufficient
because the CPU's ordering rules do not prevent the read of
xl->spinners from happening before the write that releases xl->lock,
as the two accesses are to different memory locations.

We need an explicit full barrier to enforce write-read ordering
between accesses to different memory locations.
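
As an illustration of the hazard, here is a minimal userspace sketch
using C11 atomics; the names and the fence are illustrative only, not
the kernel code:

#include <stdatomic.h>

atomic_int lock;      /* 1 = held, 0 = free */
atomic_int spinners;  /* waiters blocked in the slow path */

void unlock_sketch(void)
{
	/* release the lock */
	atomic_store_explicit(&lock, 0, memory_order_relaxed);

	/*
	 * Full barrier, the analogue of the kernel's mb(): order the
	 * store to 'lock' before the load of 'spinners'.  A
	 * compiler-only barrier would not stop the CPU from
	 * satisfying the load ahead of the store, since the two
	 * accesses are to different memory locations.
	 */
	atomic_thread_fence(memory_order_seq_cst);

	if (atomic_load_explicit(&spinners, memory_order_relaxed))
		/* kick a waiter, as xen_spin_unlock_slow() does */;
}

Note that this store-then-load pattern is the one reordering x86
actually permits, so even there a compiler barrier alone cannot close
the window.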

Without this barrier, I can't bring up more than 4 HVM guests on one
SMP machine.

[ Code and commit comments expanded -J ]

[ Impact: avoid deadlock when using Xen PV spinlocks ]

Signed-off-by: Yang Xiaowei <xiaowei.yang@xxxxxxxxx>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxx>

---
arch/x86/xen/spinlock.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)

--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -326,8 +326,13 @@ static void xen_spin_unlock(struct raw_s
smp_wmb(); /* make sure no writes get moved after unlock */
xl->lock = 0; /* release lock */

- /* make sure unlock happens before kick */
- barrier();
+ /*
+ * Make sure unlock happens before checking for waiting
+ * spinners. We need a strong barrier to enforce the
+ * write-read ordering to different memory locations, as the
+ * CPU makes no implied guarantees about their ordering.
+ */
+ mb();

if (unlikely(xl->spinners))
xen_spin_unlock_slow(xl);
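
For completeness, the waiter side of such a scheme needs a matching
full barrier between announcing itself and re-checking the lock, so
that at least one of the two sides is guaranteed to observe the
other. A sketch in the same illustrative style (not the actual
slow-path code):

void wait_sketch(void)
{
	/* announce ourselves before blocking */
	atomic_fetch_add_explicit(&spinners, 1, memory_order_relaxed);

	/*
	 * Matching full barrier: make the increment of 'spinners'
	 * visible before the re-check of 'lock'.  Paired with the
	 * fence on the unlock side, this forbids the lost-wakeup
	 * outcome where the unlocker sees spinners == 0 while the
	 * waiter still sees lock == 1.
	 */
	atomic_thread_fence(memory_order_seq_cst);

	if (atomic_load_explicit(&lock, memory_order_relaxed) == 0) {
		/* lock was freed in the meantime; don't block */
		atomic_fetch_sub_explicit(&spinners, 1, memory_order_relaxed);
		return;
	}

	/* otherwise block until the unlocker kicks us */
}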

