Re: [PATCH -v2 4/6] locking, arch: Update spin_unlock_wait()

From: Peter Zijlstra
Date: Fri May 27 2016 - 05:03:16 EST


On Fri, May 27, 2016 at 08:46:49AM +0200, Martin Schwidefsky wrote:
> > This fixes a number of spin_unlock_wait() users that (not
> > unreasonably) rely on this.
>
> All that is missing is an smp_rmb(), no?

Indeed.
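
The ACQUIRE at the tail of smp_cond_load_acquire() provides (more
than) that. Roughly, the generic macro expands to the below (a
simplified sketch, not the literal definition in the tree):

	#define smp_cond_load_acquire(ptr, cond_expr) ({		\
		typeof(ptr) __PTR = (ptr);				\
		typeof(*ptr) VAL;					\
		for (;;) {						\
			VAL = READ_ONCE(*__PTR);			\
			if (cond_expr)					\
				break;					\
			cpu_relax();	/* undirected; see below */	\
		}							\
		/* ctrl-dep + barrier upgrade the loop to ACQUIRE */	\
		smp_acquire__after_ctrl_dep();				\
		VAL;							\
	})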

> > --- a/arch/s390/include/asm/spinlock.h
> > +++ b/arch/s390/include/asm/spinlock.h
> > @@ -95,8 +95,7 @@ static inline void arch_spin_unlock(arch
> >
> >  static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
> >  {
> > -	while (arch_spin_is_locked(lock))
> > -		arch_spin_relax(lock);
> > +	smp_cond_load_acquire(&lock->lock, !VAL);
> >  }
> >  
> >  /*
>
> This change adds the smp_rmb() at the end of the waiting loop, but
> it also replaces arch_spin_relax(), an alias for arch_lock_relax(),
> with a plain cpu_relax(). That is not good; these two functions do
> *very* different things. cpu_relax() does an undirected yield with
> diagnose 0x44, and only if the system is non-SMT. arch_lock_relax()
> additionally does a cpu_is_preempted() to test whether the target
> cpu is running, and a directed yield with diagnose 0x9c.
>
> Why can't we just add the smp_rmb() to arch_spin_unlock_wait()?

We can; I forgot about the special cpu_relax() on s390, will fix.
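
Something along these lines, keeping the directed yield and only
adding the barrier after the loop (untested sketch of what the respin
might look like, not the final patch):

	static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
	{
		while (arch_spin_is_locked(lock))
			arch_spin_relax(lock); /* may do a directed yield (diag 0x9c) */
		/* make loads after this see the last critical section */
		smp_rmb();
	}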