Re: [PATCH -v2 19/33] locking,powerpc: Implement atomic{,64}_fetch_{add,sub,and,or,xor}{,_relaxed,_acquire,_release}()

From: Boqun Feng
Date: Tue May 31 2016 - 23:08:14 EST


Hi Peter,

On Tue, May 31, 2016 at 12:19:44PM +0200, Peter Zijlstra wrote:
[snip]
>
> @@ -329,20 +361,53 @@ atomic64_##op##_return_relaxed(long a, a
> return t; \
> }
>
> +#define ATOMIC64_FETCH_OP_RELAXED(op, asm_op) \
> +static inline long \
> +atomic64_fetch_##op##_relaxed(long a, atomic64_t *v) \
> +{ \
> + long res, t; \
> + \
> + __asm__ __volatile__( \
> +"1: ldarx %0,0,%4 # atomic64_fetch_" #op "_relaxed\n" \
> + #asm_op " %1,%3,%0\n" \
> +" stdcx. %1,0,%4\n" \
> +" bne- 1b\n" \
> + : "=&r" (res), "=&r" (t), "+m" (v->counter) \
> + : "r" (a), "r" (&v->counter) \
> + : "cc"); \
> + \
> + return t; \

Looks like I missed this one in v1. It should be

return res;

because the fetch_* primitives must return the value of the atomic variable
as it was before the operation, i.e. "res" (the value loaded by ldarx)
rather than "t" (the newly computed value that gets stored by stdcx.).
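
For reference, a minimal non-atomic C sketch of the intended semantics
(purely illustrative; the names are made up and this is obviously not the
actual ll/sc implementation):

	static inline long fetch_add_like(long *counter, long a)
	{
		long old = *counter;	/* snapshot before the update */

		*counter = old + a;	/* apply the operation */

		return old;		/* fetch_*() returns the old value */
	}

	static inline long add_return_like(long *counter, long a)
	{
		*counter += a;

		return *counter;	/* *_return() returns the new value */
	}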

Regards,
Boqun

> +}
> +
