Re: [patch 2/2] powerpc: native atomic_add_unless

From: Joel Schopp
Date: Wed Jan 18 2006 - 12:10:28 EST


I didn't convert LL/SC architectures so as to "encourage" them
to do atomic_add_unless natively. Here is my probably-wrong attempt
for powerpc.

Should I bring this up on the arch list? (IIRC cross posting between there and lkml is discouraged)


You should bring this up on the arch list, or it is likely to be missed by the people who would give you useful feedback.

+static __inline__ int atomic_add_unless(atomic_t *v, int a, int u)
+{
+ int t;
+ int dummy;
+
+ __asm__ __volatile__ (
+ LWSYNC_ON_SMP

I realize that, to preserve the behavior you currently get from the cmpxchg used to implement atomic_add_unless, you feel the need to put in an lwsync and isync. But I would point out that neither is actually necessary for atomic add unless itself. They are there only so that cmpxchg can be overloaded as both a lock and an unlock primitive. If atomic_add_unless isn't being used as a locking primitive somewhere, I would love to see these dropped.
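
To make the suggestion concrete, here is a minimal, untested sketch of the same asm statement with the two barrier macros simply dropped (operand numbering kept exactly as in your hunk):

	__asm__ __volatile__ (
"1:	lwarx	%0,0,%2		# atomic_add_unless\n\
	cmpw	0,%0,%4 \n\
	beq-	2f \n\
	add	%1,%3,%0 \n"
	PPC405_ERR77(0,%2)
"	stwcx.	%1,0,%2 \n\
	bne-	1b \n\
2:"
	: "=&r" (t), "=&r" (dummy)
	: "r" (&v->counter), "r" (a), "r" (u)
	: "cc", "memory");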

+"1: lwarx %0,0,%2 # atomic_add_unless\n\
+ cmpw 0,%0,%4 \n\

This is just personal preference, but I find it more readable to use the simplified mnemonic:
cmpw %0, %4
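
Both spellings assemble to the same instruction; the three-operand form names the target condition register field explicitly, while the two-operand simplified mnemonic defaults it to cr0:

	cmpw	0,%0,%4		# explicit: compare into cr0
	cmpw	%0,%4		# simplified mnemonic, cr0 implied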

+ beq- 2f \n\
+ add %1,%3,%0 \n"

Why use a separate register here? Why not reuse %0 instead of using %1? Registers are valuable.
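
For what it's worth, here is an untested sketch of a one-register variant: %0 carries both the loaded value and the sum, and a subf after the successful store restores the old value so the "t != u" return test still works. Note that the input operands renumber because the dummy output goes away; the barrier macros are kept here only to match your hunk (see the comment above about whether they are needed at all).

static __inline__ int atomic_add_unless(atomic_t *v, int a, int u)
{
	int t;

	__asm__ __volatile__ (
	LWSYNC_ON_SMP
"1:	lwarx	%0,0,%1		# atomic_add_unless\n\
	cmpw	0,%0,%3 \n\
	beq-	2f \n\
	add	%0,%2,%0 \n"
	PPC405_ERR77(0,%1)
"	stwcx.	%0,0,%1 \n\
	bne-	1b \n"
	ISYNC_ON_SMP
"	subf	%0,%2,%0 \n\
2:"
	: "=&r" (t)
	: "r" (&v->counter), "r" (a), "r" (u)
	: "cc", "memory");

	return likely(t != u);
}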

+ PPC405_ERR77(0,%2)
+" stwcx. %1,0,%2 \n\
+ bne- 1b \n"
+ ISYNC_ON_SMP
+ "\n\
+2:"
+ : "=&r" (t), "=&r" (dummy)
+ : "r" (&v->counter), "r" (a), "r" (u)
+ : "cc", "memory");
+
+ return likely(t != u);
+}
+
#define atomic_inc_not_zero(v) atomic_add_unless((v), 1, 0)
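
Just for context on how this gets used: the typical caller is refcount-style "take a reference only if the object is still live" code, roughly like the hypothetical sketch below (the struct and field names are made up for illustration):

struct my_obj {
	atomic_t refcount;
	/* ... */
};

static inline struct my_obj *my_obj_get(struct my_obj *obj)
{
	/* atomic_add_unless(v, 1, 0): add 1 unless the counter is 0;
	 * returns non-zero only if the increment actually happened. */
	if (!atomic_inc_not_zero(&obj->refcount))
		return NULL;	/* already at zero, don't resurrect it */
	return obj;
}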
