Re: [RFC] Bridging the gap between the Linux Kernel Memory Consistency Model (LKMM) and C11/C++11 atomics

From: Peter Zijlstra
Date: Fri Jul 07 2023 - 11:46:03 EST


On Fri, Jul 07, 2023 at 10:04:06AM -0400, Olivier Dion wrote:
> On Tue, 04 Jul 2023, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> > On Mon, Jul 03, 2023 at 03:20:31PM -0400, Olivier Dion wrote:
> [...]
> >> On x86-64 (gcc 13.1 -O2) we get:
> >>
> >> t0():
> >>         movl    $1, x(%rip)
> >>         movl    $1, %eax
> >>         xchgl   dummy(%rip), %eax
> >>         lock orq $0, (%rsp)      ;; Redundant with previous exchange.
> >>         movl    y(%rip), %eax
> >>         movl    %eax, r0(%rip)
> >>         ret
> >> t1():
> >>         movl    $1, y(%rip)
> >>         lock orq $0, (%rsp)
> >>         movl    x(%rip), %eax
> >>         movl    %eax, r1(%rip)
> >>         ret
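
For reference, the kind of C11-style source that produces a listing like
this is a store-buffering (Dekker) pattern in which a SEQ_CST exchange is
immediately followed by an explicit SEQ_CST fence. The following is a
plausible reconstruction, not necessarily the exact test case from the
RFC; the variable names are taken from the asm above:

        static int x, y, dummy;
        static int r0, r1;

        void t0(void)
        {
                __atomic_store_n(&x, 1, __ATOMIC_RELAXED);
                __atomic_exchange_n(&dummy, 1, __ATOMIC_SEQ_CST); /* xchgl    */
                __atomic_thread_fence(__ATOMIC_SEQ_CST);          /* lock orq */
                r0 = __atomic_load_n(&y, __ATOMIC_RELAXED);
        }

        void t1(void)
        {
                __atomic_store_n(&y, 1, __ATOMIC_RELAXED);
                __atomic_thread_fence(__ATOMIC_SEQ_CST);          /* lock orq */
                r1 = __atomic_load_n(&x, __ATOMIC_RELAXED);
        }
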
> >
> > So I would expect the compilers to do better here. They should know
> > those __atomic_thread_fence() thingies are superfluous and simply not
> > emit them. This could even be done as a peephole pass later, where it
> > sees consecutive atomic ops and the second one is a no-op.
>
> Indeed, a peephole optimization could work for this Dekker example,
> if the compiler adds the pattern for it. However, AFAIK, a peephole
> cannot be applied when the two fences end up in different basic
> blocks; for example, when a fence is only emitted on compare_exchange
> success. This limitation also implies that the optimization cannot be
> done across functions/modules (shared libraries).
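
For concreteness, here is a sketch of the cross-basic-block shape being
described (hypothetical code, not taken from the RFC):

        static int x;

        void f(void)    /* hypothetical function */
        {
                int expected = 0;

                if (__atomic_compare_exchange_n(&x, &expected, 1, 0,
                                                __ATOMIC_RELAXED,
                                                __ATOMIC_RELAXED)) {
                        /*
                         * Fence only on the success path.  On x86 the
                         * lock-prefixed cmpxchg already orders everything,
                         * but a peephole over adjacent instructions cannot
                         * look across the branch to prove it redundant.
                         */
                        __atomic_thread_fence(__ATOMIC_SEQ_CST);
                }
        }
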

LTO FTW :-)

> For example, it would be interesting to be able to
> promote an acquire fence of a pthread_mutex_lock() to a full fence on
> weakly ordered architectures while preventing a redundant fence on
> strongly ordered architectures.

That's a very non-trivial thing to do. I know Linux has
smp_mb__after_spinlock() and that x86 has it as a no-op, but even on
x86 adding a full fence after a lock has observable differences IIRC.

Specifically, the actual store that acquires the lock is not well
ordered vs the critical section itself for non-trivial spinlock
implementations (notably qspinlock).
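
For the record, the kernel-side pattern is roughly the following (a
minimal sketch with a placeholder lock, not a complete example):

        /*
         * Give the preceding spin_lock() full memory barrier
         * semantics; a no-op on x86, a real barrier on weakly
         * ordered architectures.
         */
        spin_lock(&some_lock);
        smp_mb__after_spinlock();
        /* ... critical section ... */
        spin_unlock(&some_lock);
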

For RCU you mostly care about RCsc locks (IIRC), and upgrading unlock
is a 'simpler' (IMO) approach to achieve that (which is what RCU does
with smp_mb__after_unlock_lock()).
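
Again only a sketch, with placeholder lock names:

        /*
         * Make the prior UNLOCK combined with the following LOCK act
         * as a full barrier (RCsc-like ordering), which is what RCU
         * relies on.
         */
        spin_unlock(&old_lock);
        spin_lock(&new_lock);
        smp_mb__after_unlock_lock();
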

> We know that at least Clang has such peephole optimizations for some
> architecture backends. It seems, however, that they do not recognize
> lock-prefixed instructions as fences.

They seem confused in general about when to emit MFENCE.

> AFAIK, GCC does not have that kind
> of optimization.

> We are also aware that some research has been done on this topic [0].
> The idea is to use PRE (partial redundancy elimination) to remove
> redundant fences. This would work across multiple basic blocks,
> although the paper focuses on intra-procedural eliminations. However,
> it seems that the latest work on that [1] was never completed [2].
>
> Our proposed approach provides a means for the user to express -- and
> document -- the intended semantics in the source code. This allows
> the compiler to emit only the wanted fences, without relying on
> architecture-specific backend optimizations. In other words, this
> applies even to unoptimized binaries.

I'm not a tool person, but if I were, I'd be very hesitant to add
__builtin functions that 'conflict'/'overlap' with what an optimizer
should be able to do.

Either way you need work done on the compilers, and I'm thinking
'fixing' the optimizer will benefit far more people than adding
__builtins.

Then again, I'm not a tools person, so you don't need to convince me.
But one of the selling points of atomics as a language feature was
precisely that optimizer angle. Otherwise you might as well do as we
do and inline-asm the world.

I'll shut up now. Thanks for that PRE reference [0]; it seems like a
fun read for when I'm bored.