Re: [PATCH] compiler: enable CONFIG_OPTIMIZE_INLINING forcibly

From: Will Deacon
Date: Tue Oct 01 2019 - 05:28:32 EST


Hi Nick,

On Mon, Sep 30, 2019 at 02:50:10PM -0700, Nick Desaulniers wrote:
> On Mon, Sep 30, 2019 at 5:18 AM Will Deacon <will@xxxxxxxxxx> wrote:
> > On Mon, Sep 30, 2019 at 09:05:11PM +0900, Masahiro Yamada wrote:
> > > On Mon, Sep 30, 2019 at 8:26 PM Will Deacon <will@xxxxxxxxxx> wrote:
> > > > FWIW, we've run into issues with CONFIG_OPTIMIZE_INLINING and local
> > > > variables marked as 'register' where GCC would do crazy things and end
> > > > up corrupting data, so I suspect the use of fixed registers in the arm
> > > > uaccess functions is hitting something similar:
> > > >
> > > > https://gcc.gnu.org/bugzilla/show_bug.cgi?id=91111
> > >
> > > No. Not similar at all.
> >
> > They're similar in that enabling CONFIG_OPTIMIZE_INLINING causes register
> > variables to go wrong. I agree that the ARM code looks dodgy with
> > that call to uaccess_save_and_enable(), but there are __asmeq macros
> > in there to try to catch that, so it's still very fishy.
> >
> > > I fixed it already. See
> > > https://lore.kernel.org/patchwork/patch/1132459/
> >
> > You fixed the specific case above for 32-bit ARM, but the arm64 case
> > is due to a compiler bug. As it happens, we've reworked our atomics
> > in 5.4 so that particular issue no longer triggers, but the fact remains
> > that GCC has been shown to screw up explicit register allocation for
> > perfectly legitimate code when given the flexibility to move code out
> > of line.
>
> So __attribute__((always_inline)) doesn't guarantee that code will be
> inlined. For instance in LLVM's inliner, it asks/answers "should I
> inline" and "can I inline." "Should" has to do with a cost model, and
> is very heuristic-y. "Can" has more to do with the transforms, and
> whether they're all implemented and safe. If you say
> __attribute__((always_inline)), the answer to "can I inline this" can
> still be *no*. The only way to guarantee inlining is via the C
> preprocessor. The only way to prevent inlining is via
> __attribute__((noinline)). inline and __attribute__((always_inline))
> are a heuristic laden mess and should not be relied upon. I would
> also look closely at code that *requires* inlining or the lack
> thereof to be correct. That the kernel no longer compiles at -O0 is not a
> good thing IMO, and hurts developers that want a short
> compile/execute/debug cycle.
>
> In this case, if there's a known codegen bug in a particular compiler
> or certain versions of it, I recommend the use of either the C
> preprocessor or __attribute__((noinline)) to get the desired behavior
> localized to the function in question, and for us to proceed with
> Masahiro's cleanup.

Hmm, I don't see how that would help. The problem occurs when things
are moved out of line by the compiler (see below).

> The comment above the use of CONFIG_OPTIMIZE_INLINING in
> include/linux/compiler_types.h says:
> * Force always-inline if the user requests it so via the .config.
> Which makes me grimace (__attribute__((always_inline)) doesn't *force*
> anything as per above), and the idea that forcing things marked inline
> to also be __attribute__((always_inline)) is an "optimization" (re:
> the name of the config; CONFIG_OPTIMIZE_INLINING) is also highly
> suspect. Aggressive inlining leads to image size bloat, instruction
> cache and register pressure; it is not exclusively an optimization.

Agreed on all of this, but the fact remains that GCC has been shown to
*miscompile* the arm64 kernel with CONFIG_OPTIMIZE_INLINING=y. Please,
look at this thread:

https://www.spinics.net/lists/arm-kernel/msg730329.html
https://www.spinics.net/lists/arm-kernel/msg730512.html

GCC decides to pull an atomic operation out-of-line and, in doing so,
gets the register allocations subtly wrong when passing a 'register'
variable into an inline asm. I would like to avoid this sort of thing
happening, since it can result in really nasty bugs that manifest at
runtime and are extremely difficult to debug, which is why I would much
prefer not to have this option on by default for arm64. I sent a patch
already:

https://lkml.kernel.org/r/20190930114540.27498-1-will@xxxxxxxxxx

and I'm happy to spin a v2 which depends on !CC_IS_CLANG as well.

Reducing the instruction cache footprint is great, but not if the
resulting code is broken!

Will