[PATCH 1/1] x86, ticketlock: spin_unlock_wait() can livelock

From: Oleg Nesterov
Date: Mon Dec 01 2014 - 16:35:04 EST


arch_spin_unlock_wait() looks very suboptimal, to the point that I think
it is simply wrong and can lead to livelock: if the lock is heavily
contended, the polling CPU may never observe head == tail.
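
To see why, here is a minimal single-threaded model of the ticket
counters (illustration only, not the kernel's real types or lock/unlock
code): lock() takes the next .tail ticket, unlock() advances .head, and
"locked" means head != tail. If a new waiter always queues before the
current owner releases, a CPU polling for head == tail never catches the
unlocked window:

#include <stdio.h>
#include <stdint.h>

struct tickets { uint8_t head, tail; };

static int is_locked(struct tickets t)
{
	return t.head != t.tail;
}

int main(void)
{
	struct tickets t = { .head = 0, .tail = 0 };
	int step;

	for (step = 0; step < 8; step++) {
		t.tail++;	/* a new waiter grabs a ticket */
		printf("step %d: head=%u tail=%u locked=%d\n",
		       step, t.head, t.tail, is_locked(t));
		t.head++;	/* the current owner releases the lock */
	}
	return 0;
}

Every sample reports "locked", even though ownership changes hands on
every step.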

But we do not need to wait for arch_spin_is_locked() to become false.
If the lock is held, we only need to wait until the current owner drops
it. So in that case we could simply spin until old_head !=
lock->tickets.head; however, .head can overflow, so the "unlocked" check
cannot be done only once before the main loop, it has to be repeated on
every iteration (otherwise tmp.head == head can be a false positive
after a wraparound).
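
The wraparound is easy to demonstrate; a minimal sketch, assuming an
8-bit counter for brevity (the real __ticket_t is 8 or 16 bits depending
on the configuration):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint8_t old_head, head;
	int i;

	old_head = head = 10;	/* head sampled before the loop */

	/* 256 lock/unlock cycles later the 8-bit counter has wrapped */
	for (i = 0; i < 256; i++)
		head++;

	/*
	 * False positive: head looks unchanged even though the owner we
	 * were waiting for released the lock long ago.
	 */
	printf("head == old_head: %d\n", head == old_head);	/* prints 1 */
	return 0;
}

That is, seeing tmp.head == head does not prove the original owner still
holds the lock, so the "unlocked" test has to stay inside the loop.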

Also, the "unlocked" check can ignore the TICKET_SLOWPATH_FLAG bit.
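
Just to illustrate the effect of masking that bit in the comparison
(made-up values; TICKET_SLOWPATH_FLAG is assumed to be the low bit, as
it is with CONFIG_PARAVIRT_SPINLOCKS, and 0 otherwise):

#include <stdio.h>
#include <stdint.h>

#define TICKET_SLOWPATH_FLAG	((uint8_t)1)	/* paravirt case; 0 otherwise */

int main(void)
{
	/* made-up values: tickets equal, slowpath flag set in .tail */
	uint8_t head = 4;
	uint8_t tail = 4 | TICKET_SLOWPATH_FLAG;

	printf("raw compare:    %d\n", head == tail);				/* 0 */
	printf("masked compare: %d\n", head == (tail & ~TICKET_SLOWPATH_FLAG));	/* 1 */
	return 0;
}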

Signed-off-by: Oleg Nesterov <oleg@xxxxxxxxxx>
---
arch/x86/include/asm/spinlock.h | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 9295016..a4efe47 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -183,8 +183,20 @@ static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,

 static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
 {
-	while (arch_spin_is_locked(lock))
+	__ticket_t head = ACCESS_ONCE(lock->tickets.head);
+
+	for (;;) {
+		struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
+		/*
+		 * We need to check "unlocked" in a loop, tmp.head == head
+		 * can be false positive because of overflow.
+		 */
+		if (tmp.head == (tmp.tail & ~TICKET_SLOWPATH_FLAG) ||
+		    tmp.head != head)
+			break;
+
 		cpu_relax();
+	}
 }
 
 /*
--
1.5.5.1

