Re: [PATCH V7 1/5] asm-generic: ticket-lock: Remove unnecessary atomic_read

From: Waiman Long
Date: Tue Jun 28 2022 - 14:06:27 EST


On 6/28/22 04:17, guoren@xxxxxxxxxx wrote:
From: Guo Ren <guoren@xxxxxxxxxxxxxxxxx>

Remove the unnecessary atomic_read() in arch_spin_value_unlocked(lock):
the lock is passed by value, so its counter is already available
locally. This prevents arch_spin_value_unlocked() from contending on
the live spinlock cache line again.

Signed-off-by: Guo Ren <guoren@xxxxxxxxxxxxxxxxx>
Signed-off-by: Guo Ren <guoren@xxxxxxxxxx>
Cc: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Cc: Arnd Bergmann <arnd@xxxxxxxx>
Cc: Palmer Dabbelt <palmer@xxxxxxxxxxxx>
---
include/asm-generic/spinlock.h | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/include/asm-generic/spinlock.h b/include/asm-generic/spinlock.h
index fdfebcb050f4..f1e4fa100f5a 100644
--- a/include/asm-generic/spinlock.h
+++ b/include/asm-generic/spinlock.h
@@ -84,7 +84,9 @@ static __always_inline int arch_spin_is_contended(arch_spinlock_t *lock)
static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
{
- return !arch_spin_is_locked(&lock);
+ u32 val = lock.counter;
+
+ return ((val >> 16) == (val & 0xffff));
}
#include <asm/qrwlock.h>

lockref.c is the only current user of arch_spin_value_unlocked(). This change is probably OK with this particular use case. Do you have any performance data about the improvement due to this change?

You may have to document that we have to revisit that if another use case shows up.

Cheers,
Longman