Re: [PATCH] LoongArch: Fixup cmpxchg semantic for memory barrier

From: WANG Rui
Date: Mon Jul 31 2023 - 22:29:51 EST


Hello,

On Tue, Aug 1, 2023 at 9:16 AM <guoren@xxxxxxxxxx> wrote:
> diff --git a/arch/loongarch/include/asm/cmpxchg.h b/arch/loongarch/include/asm/cmpxchg.h
> index 979fde61bba8..6a05b92814b6 100644
> --- a/arch/loongarch/include/asm/cmpxchg.h
> +++ b/arch/loongarch/include/asm/cmpxchg.h
> @@ -102,8 +102,8 @@ __arch_xchg(volatile void *ptr, unsigned long x, int size)
> " move $t0, %z4 \n" \
> " " st " $t0, %1 \n" \
> " beqz $t0, 1b \n" \
> - "2: \n" \
> __WEAK_LLSC_MB \
> + "2: \n" \

Thanks for the patch.

This would look pretty good if it weren't for the special memory
barrier semantics of LoongArch's LL and SC instructions.

The memory barrier behavior of LoongArch's LL/SC instructions is:

* LL: <memory-barrier> + <load-exclusive>
* SC: <store-conditional> + <memory-barrier>

In addition, LoongArch's weak memory model allows load/load
reordering for the same address.

So __WEAK_LLSC_MB[1] is used to prevent that load/load reordering,
and no explicit barrier instruction is required after SC, since SC
already implies one.
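
To make the placement concrete, here is a sketch of the __cmpxchg_asm
loop with the barrier in its original position (the ld/st lines are
reconstructed from context, since the quoted hunk only shows part of
the sequence):

    1:  ll.w   %0, %2         # LL: <memory-barrier> + <load-exclusive>
        bne    %0, %z3, 2f    # compare failed: skip SC, branch to 2
        move   $t0, %z4
        sc.w   $t0, %1        # SC: <store-conditional> + <memory-barrier>
        beqz   $t0, 1b        # SC failed: retry
    2:
        __WEAK_LLSC_MB        # needed on the bne (failure) path, where no
                              # SC, and hence no barrier, was executed;
                              # orders the LL's load against later loads
                              # to the same address

The success path falls through the label and executes __WEAK_LLSC_MB
as well, which is redundant after SC's barrier but harmless; moving
the label below the barrier, as the patch does, would instead leave
the failure path with no barrier at all.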

[1] https://lore.kernel.org/loongarch/20230516124536.535343-1-chenhuacai@xxxxxxxxxxx/

Regards,
--
WANG Rui