Re: [PATCH v5 tip/core/locking 5/7] Documentation/memory-barriers.txt: Downgrade UNLOCK+LOCK

From: Josh Triplett
Date: Mon Dec 09 2013 - 20:32:55 EST


On Mon, Dec 09, 2013 at 05:28:01PM -0800, Paul E. McKenney wrote:
> From: "Paul E. McKenney" <paulmck@xxxxxxxxxxxxxxxxxx>
>
> Historically, an UNLOCK+LOCK pair executed by one CPU, by one task,
> or on a given lock variable has implied a full memory barrier. In a
> recent LKML thread, the wisdom of this historical approach was called
> into question: http://www.spinics.net/lists/linux-mm/msg65653.html,
> in part due to the memory-order complexities of low-handoff-overhead
> queued locks on x86 systems.
>
> This patch therefore removes this guarantee from the documentation, and
> further documents how to restore it via a new smp_mb__after_unlock_lock()
> primitive.
>
> Signed-off-by: Paul E. McKenney <paulmck@xxxxxxxxxxxxxxxxxx>
> Cc: Ingo Molnar <mingo@xxxxxxxxxx>
> Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Cc: Oleg Nesterov <oleg@xxxxxxxxxx>
> Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
> Cc: Will Deacon <will.deacon@xxxxxxx>
> Cc: Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
> Cc: Waiman Long <waiman.long@xxxxxx>
> Cc: Andrea Arcangeli <aarcange@xxxxxxxxxx>
> Cc: Andi Kleen <andi@xxxxxxxxxxxxxx>
> Cc: Michel Lespinasse <walken@xxxxxxxxxx>
> Cc: Davidlohr Bueso <davidlohr.bueso@xxxxxx>
> Cc: Rik van Riel <riel@xxxxxxxxxx>
> Cc: Peter Hurley <peter@xxxxxxxxxxxxxxxxxx>
> Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>
> Cc: Arnd Bergmann <arnd@xxxxxxxx>
> Cc: Benjamin Herrenschmidt <benh@xxxxxxxxxxxxxxxxxxx>
> ---
> Documentation/memory-barriers.txt | 51 +++++++++++++++++++++++++++++++++------
> 1 file changed, 44 insertions(+), 7 deletions(-)
>
> diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
> index a0763db314ff..efb791d33e5a 100644
> --- a/Documentation/memory-barriers.txt
> +++ b/Documentation/memory-barriers.txt
> @@ -1626,7 +1626,10 @@ for each construct. These operations all imply certain barriers:
> operation has completed.
>
> Memory operations issued before the LOCK may be completed after the LOCK
> - operation has completed.
> + operation has completed. An smp_mb__before_spinlock(), combined
> + with a following LOCK, acts as an smp_wmb(). Note the "w";
> + this is smp_wmb(), not smp_mb(). The smp_mb__before_spinlock()
> + primitive is free on many architectures.
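If I'm reading the changelog and the hunk above correctly, the intended
usage of the two primitives is roughly the following (a minimal sketch;
the lock and variable names are made up, and I'm assuming the semantics
described above):

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(lock_a);         /* illustrative locks and data */
static DEFINE_SPINLOCK(lock_b);
static int x, y;

static void unlock_lock_full_barrier(void)
{
        spin_lock(&lock_a);
        x = 1;
        spin_unlock(&lock_a);

        spin_lock(&lock_b);
        smp_mb__after_unlock_lock();    /* UNLOCK of lock_a plus this LOCK of
                                         * lock_b now act as a full barrier */
        y = 1;
        spin_unlock(&lock_b);
}

static void store_then_lock(void)
{
        x = 1;                          /* store before the LOCK ... */
        smp_mb__before_spinlock();      /* ... combined with the LOCK below,
                                         * acts as smp_wmb(): the store to x
                                         * is ordered before later stores */
        spin_lock(&lock_a);
        y = 1;
        spin_unlock(&lock_a);
}
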

Gah. That seems highly error-prone; why isn't that
"smp_wmb__before_spinlock()"?

- Josh Triplett