Re: [PATCH v3] mutex: Documentation rewrite

From: Waiman Long
Date: Tue May 27 2014 - 12:03:27 EST


On 05/23/2014 11:16 AM, Davidlohr Bueso wrote:
From: Davidlohr Bueso <davidlohr@xxxxxx>


+
+(ii) midpath: aka optimistic spinning, tries to spin for acquisition
+ when there are no pending waiters and the lock owner is currently

That is not true. The spinning code doesn't check whether there are any pending waiters, so some tasks can be spinning while others are sleeping. The only checks are: 1) the lock owner is running; 2) there is no other higher-priority ready-to-run task (need_resched).
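
Roughly, the spin decision boils down to something like this (a simplified sketch, not the exact code -- see mutex_spin_on_owner()/mutex_can_spin_on_owner() in kernel/locking/mutex.c; the function and field names here are illustrative):

/*
 * Keep spinning only while the owner is running on a CPU and we
 * have not been asked to reschedule. Note that nothing here looks
 * at the wait list.
 */
static bool spin_worthwhile(struct mutex *lock)
{
	struct task_struct *owner;
	bool owner_running = true;

	rcu_read_lock();
	owner = ACCESS_ONCE(lock->owner);
	if (owner)
		owner_running = owner->on_cpu;
	rcu_read_unlock();

	return owner_running && !need_resched();
}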

+ running on a different CPU. The rationale is that if the lock owner
+ is running, it is likely to release the lock soon. The mutex spinners
+ are queued up using MCS lock so that only one spinner can compete for
+ the mutex.
+
+ The MCS lock (proposed by Mellor-Crummey and Scott) is a simple spinlock
+ with the desirable properties of being fair and with each cpu trying
+ to acquire the lock spinning on a local variable. It avoids expensive
+ cacheline bouncing that common test-and-set spinlock implementations
+ incur. An MCS-like lock is specially tailored for optimistic spinning
+ for sleeping lock implementation. An important feature of the customized
+ MCS lock is that it has the extra property that spinners are able to exit
+ the MCS spinlock queue when they need to reschedule. This further helps
+ avoid situations where MCS spinners that need to reschedule would continue
+ waiting to spin on mutex owner, only to go directly to slowpath upon
+ obtaining the MCS lock.
+
+
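
For reference, the core of a plain MCS lock is small enough to sketch here (illustrative only -- the kernel's version lives in kernel/locking/mcs_spinlock.h, and the cancellable variant used for optimistic spinning differs in the details):

struct mcs_node {
	struct mcs_node *next;
	int locked;
};

static void mcs_lock(struct mcs_node **tail, struct mcs_node *node)
{
	struct mcs_node *prev;

	node->next = NULL;
	node->locked = 0;

	/* Atomically queue ourselves at the tail. */
	prev = xchg(tail, node);
	if (!prev)
		return;			/* queue was empty, lock is ours */

	ACCESS_ONCE(prev->next) = node;
	while (!ACCESS_ONCE(node->locked))
		cpu_relax();		/* spin on our own cacheline */
}

static void mcs_unlock(struct mcs_node **tail, struct mcs_node *node)
{
	struct mcs_node *next = ACCESS_ONCE(node->next);

	if (!next) {
		/* No successor visible: try to reset the tail. */
		if (cmpxchg(tail, node, NULL) == node)
			return;
		/* A successor is queueing in; wait for the link. */
		while (!(next = ACCESS_ONCE(node->next)))
			cpu_relax();
	}
	ACCESS_ONCE(next->locked) = 1;	/* hand the lock off */
}

The cancellable variant additionally lets a spinner unlink its own node from the queue when need_resched() is set, which is the extra property the text above refers to.
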
+(iii) slowpath: last resort, if the lock is still unable to be acquired,
+ the task is added to the wait-queue and sleeps until it can be taken.

It would be clearer to say "until woken up by the unlock path".
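
Something along these lines (heavily simplified -- the real code is __mutex_lock_common() in kernel/locking/mutex.c; I am omitting the interruptible/killable cases and the ww_mutex bits):

	spin_lock(&lock->wait_lock);
	list_add_tail(&waiter.list, &lock->wait_list);
	waiter.task = current;

	for (;;) {
		/*
		 * Setting the count to -1 marks the lock contended,
		 * so the unlock path knows it must wake a waiter.
		 */
		if (atomic_xchg(&lock->count, -1) == 1)
			break;		/* got the lock */

		__set_current_state(TASK_UNINTERRUPTIBLE);
		spin_unlock(&lock->wait_lock);
		schedule();	/* sleep until the unlock path wakes us */
		spin_lock(&lock->wait_lock);
	}

	__set_current_state(TASK_RUNNING);
	list_del(&waiter.list);
	spin_unlock(&lock->wait_lock);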

Other than that, the rest looks fine to me.

-Longman



