Re: [PATCH] Partially revert "compiler: enable CONFIG_OPTIMIZE_INLINING forcibly"

From: Andrew Murray
Date: Tue Oct 01 2019 - 07:41:43 EST


On Tue, Oct 01, 2019 at 11:42:54AM +0100, Will Deacon wrote:
> On Tue, Oct 01, 2019 at 06:40:26PM +0900, Masahiro Yamada wrote:
> > On Mon, Sep 30, 2019 at 8:45 PM Will Deacon <will@xxxxxxxxxx> wrote:
> > > diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
> > > index 93d97f9b0157..c37c72adaeff 100644
> > > --- a/lib/Kconfig.debug
> > > +++ b/lib/Kconfig.debug
> > > @@ -312,6 +312,7 @@ config HEADERS_CHECK
> > >
> > > config OPTIMIZE_INLINING
> > > def_bool y
> > > + depends on !(ARM || ARM64) # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91111
> >
> >
> > This is too big a hammer.
>
> It matches the previous default behaviour!
>
> > For ARM, it is not a compiler bug, so I am trying to fix the kernel code.
> >
> > For ARM64, even if it is a compiler bug, you can add __always_inline
> > to the functions in question.
> > (arch_atomic64_dec_if_positive in this case).
> >
> > You do not need to force __always_inline globally.
>
> So you'd prefer I do something like the diff below? I mean, it's a start,
> but I do worry that we're hanging arch/arm/ out to dry.

If I've understood one part of this issue correctly - taking the
c2p_unsupported build failure as an example [1] - there are places in
the kernel that call a deliberately undefined function and assume that
the compiler will both prove at compile time that the call can never
happen and optimise it away. That assumption generally only holds when
the calling function is inlined, since inlining is what lets the
compiler see the constant arguments that make the call dead.
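
To make that concrete, here is a minimal standalone sketch of the
pattern (not kernel code - __bad_size and load_val are made-up names
used only to illustrate the shape of the real helpers):

/*
 * Deliberately declared but never defined anywhere: the only thing
 * keeping the link from failing is the compiler proving the call dead
 * and dropping it.
 */
extern unsigned long __bad_size(void);

static inline unsigned long load_val(const void *ptr, int size)
{
        switch (size) {
        case 4:
                return *(const unsigned int *)ptr;
        case 8:
                return *(const unsigned long *)ptr;
        default:
                /* must be optimised away */
                return __bad_size();
        }
}

int main(void)
{
        unsigned int x = 42;

        /* size is a compile-time constant, so the default branch is dead */
        return (int)load_val(&x, sizeof(x));
}

Built with gcc -O2 the call is inlined, the dead branch is eliminated
and __bad_size() never reaches the linker; built with -O0, or if the
compiler chooses not to inline, the reference survives and we get an
"undefined reference" link error - which, as far as I can tell, is the
same failure mode as the c2p_unsupported report.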

But I suspect there may be other cases similar to c2p_unsupported which
are still lurking.

For example, the following functions are called but never defined, and
so may be worth investigating:

__buggy_use_of_MTHCA_PUT, __put_dbe_unknown, __cmpxchg_wrong_size,
__bad_percpu_size, __put_user_bad, __get_user_unknown,
__bad_unaligned_access_size, __bad_xchg

But more generally, as this is a common pattern, isn't there a benefit
here in changing all of these to BUILD_BUG()? (So that they can be
found easily.)

Or, to avoid this class of issue entirely, change them to BUG() or
unreachable() - though then we lose the benefit of compile-time
detection?
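
For comparison, a rough sketch of what the BUILD_BUG()-style variant
looks like (again standalone and simplified - as I understand it the
kernel's BUILD_BUG()/compiletime_assert() are built on the same GCC
"error" attribute, but __bad_size_msg below is a made-up name):

/*
 * Still no definition anywhere, but the error attribute means the
 * diagnostic fires at compile time with a readable message if a call
 * survives optimisation, rather than as a cryptic link error.
 */
extern unsigned long __bad_size_msg(void)
        __attribute__((error("load_val: unsupported size")));

static inline unsigned long load_val(const void *ptr, int size)
{
        switch (size) {
        case 4:
                return *(const unsigned int *)ptr;
        case 8:
                return *(const unsigned long *)ptr;
        default:
                /* compile error, not a link error, if not eliminated */
                return __bad_size_msg();
        }
}

Whereas switching them to BUG() or unreachable() would always build
cleanly, and the mistake would only show up, if at all, at runtime.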

Thanks,

Andrew Murray

[1] https://lore.kernel.org/patchwork/patch/1122097/
>
> Will
>
> --->8
>
> diff --git a/arch/arm64/include/asm/atomic_lse.h b/arch/arm64/include/asm/atomic_lse.h
> index c6bd87d2915b..574808b9df4c 100644
> --- a/arch/arm64/include/asm/atomic_lse.h
> +++ b/arch/arm64/include/asm/atomic_lse.h
> @@ -321,7 +321,8 @@ static inline s64 __lse_atomic64_dec_if_positive(atomic64_t *v)
> }
>
> #define __CMPXCHG_CASE(w, sfx, name, sz, mb, cl...) \
> -static inline u##sz __lse__cmpxchg_case_##name##sz(volatile void *ptr, \
> +static __always_inline u##sz \
> +__lse__cmpxchg_case_##name##sz(volatile void *ptr, \
> u##sz old, \
> u##sz new) \
> { \
> @@ -362,7 +363,8 @@ __CMPXCHG_CASE(x, , mb_, 64, al, "memory")
> #undef __CMPXCHG_CASE
>
> #define __CMPXCHG_DBL(name, mb, cl...) \
> -static inline long __lse__cmpxchg_double##name(unsigned long old1, \
> +static __always_inline long \
> +__lse__cmpxchg_double##name(unsigned long old1, \
> unsigned long old2, \
> unsigned long new1, \
> unsigned long new2, \
>