Re: [PATCH v10 03/19] qspinlock: Add pending bit

From: Waiman Long
Date: Fri May 09 2014 - 20:50:22 EST


On 05/08/2014 02:57 PM, Peter Zijlstra wrote:
> On Wed, May 07, 2014 at 11:01:31AM -0400, Waiman Long wrote:
> > +/**
> > + * trylock_pending - try to acquire queue spinlock using the pending bit
> > + * @lock : Pointer to queue spinlock structure
> > + * @pval : Pointer to value of the queue spinlock 32-bit word
> > + * Return: 1 if lock acquired, 0 otherwise
> > + */
> > +static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
>
> Still don't like that you put it in a separate function, but you don't
> need the pointer thing. Note how after you fail the trylock_pending()
> you touch the second (node) cacheline.
>
> > @@ -110,6 +184,9 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> >
> >  	BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
> >
> > +	if (trylock_pending(lock, &val))
> > +		return; /* Lock acquired */
> > +
> >  	node = this_cpu_ptr(&mcs_nodes[0]);
> >  	idx = node->count++;
> >  	tail = encode_tail(smp_processor_id(), idx);
> > @@ -119,15 +196,18 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> >  	node->next = NULL;
> >
> >  	/*
> > +	 * we already touched the queueing cacheline; don't bother with pending
> > +	 * stuff.
> > +	 *
> >  	 * trylock || xchg(lock, node)
> >  	 *
> > -	 * 0,0 -> 0,1 ; trylock
> > -	 * p,x -> n,x ; prev = xchg(lock, node)
> > +	 * 0,0,0 -> 0,0,1 ; trylock
> > +	 * p,y,x -> n,y,x ; prev = xchg(lock, node)
> >  	 */
>
> And any value of @val we might have had here is completely out-dated.
> The only thing that makes sense is to set:
>
> 	val = 0;
>
> Which makes us start with a trylock; alternatively we can re-read val.

That is true. I will make the change to get rid of the pointer thing.
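
The cmpxchg already returns the current value of the lock word, so the
loop can keep val fresh in a local variable instead of passing it back
out through a pointer. Roughly like this (just a sketch of the
direction, not the actual next version):

	for (;;) {
		/*
		 * If we observe any contention beyond the locked
		 * byte; queue.
		 */
		if (val & ~_Q_LOCKED_MASK)
			return 0;

		new = _Q_LOCKED_VAL;
		if (val)		/* 0,0,1 -> 0,1,1 : set pending */
			new = val | _Q_PENDING_VAL;

		old = atomic_cmpxchg(&lock->val, val, new);
		if (old == val)
			break;

		val = old;	/* failed cmpxchg: old is the fresh value */
	}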

As for the separate trylock_pending() function, my original goal was to delineate the different portions of the code more clearly. Given that I break the slowpath function in two in a later patch anyway, I may not really need the separate helper; I will pull it back into the slowpath in the next version.
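
Folded back in, the top of the slowpath would then look roughly like
the sketch below (direction only, not the final patch; the MCS queuing
code after the label is unchanged from the current version):

void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
{
	struct mcs_spinlock *prev, *next, *node;
	u32 new, old, tail;
	int idx;

	BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));

	/*
	 * trylock || pending
	 *
	 * 0,0,0 -> 0,0,1 ; trylock
	 * 0,0,1 -> 0,1,1 ; pending
	 */
	for (;;) {
		/*
		 * If we observe any contention; queue.
		 */
		if (val & ~_Q_LOCKED_MASK)
			goto queue;

		new = _Q_LOCKED_VAL;
		if (val)
			new = val | _Q_PENDING_VAL;

		old = atomic_cmpxchg(&lock->val, val, new);
		if (old == val)
			break;

		val = old;
	}

	/*
	 * we won the trylock
	 */
	if (new == _Q_LOCKED_VAL)
		return;

	/*
	 * we're pending: wait for the owner to go away, then take
	 * ownership and clear the pending bit.
	 *
	 * *,1,1 -> *,1,0 -> *,0,1
	 */
	while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
		arch_mutex_cpu_relax();

	for (;;) {
		new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL;
		old = atomic_cmpxchg(&lock->val, val, new);
		if (old == val)
			return;
		val = old;
	}

queue:
	node = this_cpu_ptr(&mcs_nodes[0]);
	idx = node->count++;
	tail = encode_tail(smp_processor_id(), idx);
	/* ... MCS queuing as in the current patch ... */
}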

-Longman