Re: [PATCH v2 05/10] percpu: Wire up cmpxchg128

From: Peter Zijlstra
Date: Mon Feb 06 2023 - 07:15:19 EST


On Mon, Feb 06, 2023 at 12:24:00PM +0100, Peter Zijlstra wrote:

> > Unless I have misunderstood what you are doing, my concerns are
> > still the same:
> >
> > > #define this_cpu_cmpxchg(pcp, oval, nval) \
> > > - __pcpu_size_call_return2(this_cpu_cmpxchg_, pcp, oval, nval)
> > > + __pcpu_size16_call_return2(this_cpu_cmpxchg_, pcp, oval, nval)
> > > #define this_cpu_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2) \
> > > __pcpu_double_call_return_bool(this_cpu_cmpxchg_double_, pcp1, pcp2, oval1, oval2, nval1, nval2)
> >
> > Having a variable-length this_cpu_cmpxchg() that turns into cmpxchg128()
> > and cmpxchg64() even on CPUs where this traps (!X86_FEATURE_CX16) seems
> > like a bad design to me.
> >
> > I would much prefer fixed-length this_cpu_cmpxchg64()/this_cpu_cmpxchg128()
> > calls that never trap but fall back to the generic version on CPUs that
> > are lacking the atomics.
>
> You're thinking accidental usage etc..? Lemme see what I can do.

So looking at this, I remember why I did it like this: currently 32bit
archs silently fall back to the generics for most/all 64bit ops.

And personally I would just as soon drop support for the
!X86_FEATURE_CX* cpus... :/ Those are some serious museum pieces.

One problem with silent downgrades like this is that the semantics vs
NMI change -- the generic fallback only disables IRQs, which does not
exclude NMIs -- which makes for subtle bugs on said museum pieces.

Basically, using 64bit percpu ops on 32bit is already somewhat dangerous
-- wiring up native cmpxchg64 support in that case seemed an
improvement.

Anyway... let me get on with doing explicit
{raw,this}_cpu_cmpxchg{64,128}() thingies.