Re: [PATCH] sparc: mark __arch_xchg() as __always_inline

From: Mark Rutland
Date: Wed Jun 28 2023 - 07:45:53 EST


On Wed, Jun 28, 2023 at 11:49:18AM +0200, Arnd Bergmann wrote:
> From: Arnd Bergmann <arnd@xxxxxxxx>
>
> An otherwise correct change to the atomic operations uncovered an
> existing bug in the sparc __arch_xchg() function, which calls
> __xchg_called_with_bad_pointer() when its arguments are unknown at
> compile time:
>
> ERROR: modpost: "__xchg_called_with_bad_pointer" [lib/atomic64_test.ko] undefined!
>
> This now happens because gcc determines that it's better to not inline the
> function. Avoid this by just marking the function as __always_inline
> to force the compiler to do the right thing here.
>
> Reported-by: Guenter Roeck <linux@xxxxxxxxxxxx>
> Link: https://lore.kernel.org/all/c525adc9-6623-4660-8718-e0c9311563b8@xxxxxxxxxxxx/
> Fixes: d12157efc8e08 ("locking/atomic: make atomic*_{cmp,}xchg optional")
> Signed-off-by: Arnd Bergmann <arnd@xxxxxxxx>

Aha; you saved me writing a patch! :)
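
For the record, the trick being relied upon here is the usual link-time
size check: the fallback is deliberately left undefined, so a call to it
that survives constant folding turns into a link/modpost error rather than
a silent runtime bug. A rough standalone sketch of the pattern (toy names,
not the actual sparc code):

#ifndef __always_inline
#define __always_inline inline __attribute__((__always_inline__))
#endif

/* Intentionally has no definition anywhere. */
extern void __toy_bad_size(void);

static __always_inline unsigned long
__toy_xchg(unsigned long x, volatile void *ptr, int size)
{
	switch (size) {
	case 4:
		/* real per-size implementation would go here */
		return x;
	default:
		/* only reachable when 'size' is unsupported */
		__toy_bad_size();
		return x;
	}
}

With plain 'inline' the compiler may emit the function out of line, where
'size' is a runtime value, so the call to the undefined fallback survives
into the object file and modpost complains, which is exactly the error
quoted above. __always_inline guarantees the switch is folded at each call
site and the dead default branch is discarded.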

We should probably do likewise for all the other bits like __cmpxchg(), but
either way:

Acked-by: Mark Rutland <mark.rutland@xxxxxxx>

Mark.

> ---
> arch/sparc/include/asm/cmpxchg_32.h | 2 +-
> arch/sparc/include/asm/cmpxchg_64.h | 2 +-
> 2 files changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/arch/sparc/include/asm/cmpxchg_32.h b/arch/sparc/include/asm/cmpxchg_32.h
> index 7a1339533d1d7..d0af82c240b73 100644
> --- a/arch/sparc/include/asm/cmpxchg_32.h
> +++ b/arch/sparc/include/asm/cmpxchg_32.h
> @@ -15,7 +15,7 @@
> unsigned long __xchg_u32(volatile u32 *m, u32 new);
> void __xchg_called_with_bad_pointer(void);
>
> -static inline unsigned long __arch_xchg(unsigned long x, __volatile__ void * ptr, int size)
> +static __always_inline unsigned long __arch_xchg(unsigned long x, __volatile__ void * ptr, int size)
> {
> switch (size) {
> case 4:
> diff --git a/arch/sparc/include/asm/cmpxchg_64.h b/arch/sparc/include/asm/cmpxchg_64.h
> index 66cd61dde9ec1..3de25262c4118 100644
> --- a/arch/sparc/include/asm/cmpxchg_64.h
> +++ b/arch/sparc/include/asm/cmpxchg_64.h
> @@ -87,7 +87,7 @@ xchg16(__volatile__ unsigned short *m, unsigned short val)
> return (load32 & mask) >> bit_shift;
> }
>
> -static inline unsigned long
> +static __always_inline unsigned long
> __arch_xchg(unsigned long x, __volatile__ void * ptr, int size)
> {
> switch (size) {
> --
> 2.39.2
>