Re: [patch 2/3] scheduler: add full memory barriers upon task switch at runqueue lock/unlock

From: Linus Torvalds
Date: Mon Feb 01 2010 - 10:28:55 EST

On Sun, 31 Jan 2010, Mathieu Desnoyers wrote:
>
> Adds no overhead on x86, because LOCK-prefixed atomic operations of the spin
> lock/unlock already imply a full memory barrier.

.. and as Nick pointed out, you're fundamentally incorrect on this. The
unlock on x86 is no memory barrier at all: it's just a plain store,
because the x86 memory ordering rules are such that a regular store
always has release consistency.
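
To be concrete, here's a minimal userspace sketch of the asymmetry (this
is not the kernel's actual arch/x86 spinlock code; toy_spin_lock and
toy_spin_unlock are made-up names):

	#include <stdatomic.h>
	#include <stdbool.h>

	typedef struct {
		atomic_bool locked;
	} toy_spinlock_t;

	static inline void toy_spin_lock(toy_spinlock_t *l)
	{
		/* Acquire RMW: on x86 this becomes an (implicitly
		 * LOCKed) XCHG, which really is a full memory
		 * barrier. */
		while (atomic_exchange_explicit(&l->locked, true,
						memory_order_acquire))
			while (atomic_load_explicit(&l->locked,
						    memory_order_relaxed))
				;	/* spin */
	}

	static inline void toy_spin_unlock(toy_spinlock_t *l)
	{
		/* Release store: on x86 this is a plain MOV, with no
		 * LOCK prefix and no fence.  Earlier accesses stay
		 * before the store, but a later load on the same CPU
		 * can still be reordered up before it -- so this is
		 * NOT an smp_mb(). */
		atomic_store_explicit(&l->locked, false,
				      memory_order_release);
	}

So only the lock side gives you a full barrier for free; the unlock side
gives you release ordering and nothing more.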

But more importantly, you don't even explain why the added smp_mb()
helps.

Why does a smp_mb() at the lock/unlock even matter? Reading accesses by
the same CPU sure as hell do _not_ matter, so the whole concept seems
totally broken. There is no way in _hell_ that whatever unlocked thing
can ever write the variables protected by the lock, only read them. So a
full memory barrier makes zero sense to begin with.
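
The shape of the thing being questioned looks like this (a compilable
userspace sketch with made-up names; a pthread mutex stands in for the
runqueue spinlock):

	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdio.h>

	struct task { int pid; };
	struct runqueue {
		pthread_mutex_t lock;
		struct task *_Atomic curr;  /* written only under 'lock' */
	};

	static struct task prev_task = { .pid = 1 };
	static struct task next_task = { .pid = 2 };
	static struct runqueue rq = {
		.lock = PTHREAD_MUTEX_INITIALIZER,
		.curr = &prev_task,
	};

	/* The task-switch side: every write to rq.curr happens with
	 * the lock held.  The patch would put smp_mb() after the lock
	 * and before the unlock. */
	static void *switcher(void *unused)
	{
		pthread_mutex_lock(&rq.lock);
		atomic_store_explicit(&rq.curr, &next_task,
				      memory_order_relaxed);
		pthread_mutex_unlock(&rq.lock);
		return NULL;
	}

	/* The "unlocked thing": it can only ever READ the
	 * lock-protected state; it has no way to write it. */
	static void *observer(void *unused)
	{
		struct task *t = atomic_load_explicit(&rq.curr,
						      memory_order_relaxed);
		printf("observer saw pid %d\n", t->pid);
		return NULL;
	}

	int main(void)
	{
		pthread_t a, b;
		pthread_create(&a, NULL, switcher, NULL);
		pthread_create(&b, NULL, observer, NULL);
		pthread_join(a, NULL);
		pthread_join(b, NULL);
		return 0;
	}

The observer side never stores anything, so there is no store on that
side for the switcher's smp_mb() to pair with.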

So what are these magical memory barriers all about?

Linus