[PATCH] LoongArch: Fix cmpxchg memory barrier semantics

From: guoren
Date: Mon Jul 31 2023 - 21:16:11 EST


From: Guo Ren <guoren@xxxxxxxxxxxxxxxxx>

When cmpxchg fails, no memory barrier needs to be taken. Only when
cmpxchg succeeds and the new value has been written is a memory
barrier needed.

- cmpxchg_asm: The unnecessary __WEAK_LLSC_MB on the failure path
reduces the performance of cmpxchg retry loops.
- cmpxchg_small: Misses a necessary __WEAK_LLSC_MB when sc succeeds.
This is a bug, because there is no memory synchronization
after a successful sc.

Signed-off-by: Guo Ren <guoren@xxxxxxxxxxxxxxxxx>
Signed-off-by: Guo Ren <guoren@xxxxxxxxxx>
---
arch/loongarch/include/asm/cmpxchg.h | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/loongarch/include/asm/cmpxchg.h b/arch/loongarch/include/asm/cmpxchg.h
index 979fde61bba8..6a05b92814b6 100644
--- a/arch/loongarch/include/asm/cmpxchg.h
+++ b/arch/loongarch/include/asm/cmpxchg.h
@@ -102,8 +102,8 @@ __arch_xchg(volatile void *ptr, unsigned long x, int size)
" move $t0, %z4 \n" \
" " st " $t0, %1 \n" \
" beqz $t0, 1b \n" \
- "2: \n" \
__WEAK_LLSC_MB \
+ "2: \n" \
: "=&r" (__ret), "=ZB"(*m) \
: "ZB"(*m), "Jr" (old), "Jr" (new) \
: "t0", "memory"); \
@@ -148,10 +148,8 @@ static inline unsigned int __cmpxchg_small(volatile void *ptr, unsigned int old,
" or %1, %1, %z6 \n"
" sc.w %1, %2 \n"
" beqz %1, 1b \n"
- " b 3f \n"
- "2: \n"
__WEAK_LLSC_MB
- "3: \n"
+ "2: \n"
: "=&r" (old32), "=&r" (temp), "=ZC" (*ptr32)
: "ZC" (*ptr32), "Jr" (mask), "Jr" (old), "Jr" (new)
: "memory");
--
2.36.1