Re: [PATCH v10 05/15] x86: support asm function sections

From: Peter Zijlstra
Date: Fri Feb 11 2022 - 10:45:58 EST


On Wed, Feb 09, 2022 at 07:57:42PM +0100, Alexander Lobakin wrote:
> Address places which need special care and enable
> CONFIG_ARCH_SUPPORTS_ASM_FUNCTION_SECTIONS.
>
> Notably:
> - propagate `--sectname-subst` to KBUILD_AFLAGS in
> x86/boot/Makefile and x86/boot/compressed/Makefile, as both
> override it;
> - symbols whose names start with a dot (like ".Lrelocated") have
> to be handled manually with SYM_*_START_SECT(.Lrelocated, relocated),
> as ".." (two consecutive dots) is a special case and CPP doesn't
> want to concatenate two dots in general (see the sketch below);
> - some symbols explicitly need to reside in one section (like
> kexec control code, hibernation page etc.);
> - macros creating aliases for functions (like __memcpy() for
> memcpy() etc.) should go after the main declaration (as
> aliases should be declared in the same section and they
> don't have SYM_PUSH_SECTION() inside);
> - directives like ".org" and ".align" should be manually pushed
> into the same section the following symbol goes to;
> - expand the indirect_thunk wildcards in vmlinux.lds.S to collect
> those symbols back into the "main" text section;
> - functions defined in inline asm, like __raw_callee*(), need
> their sections pushed manually as well.
>
> With these changes and `-ffunction-sections` enabled, the "plain"
> ".text" section ends up empty, which means everything works as
> expected.
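>
> For illustration only, a minimal sketch of the dot-prefixed case.
> It is not lifted from the diff: the _SECT-suffixed variant shown
> here is assumed from the SYM_*_START_SECT() convention described
> above, and the real call sites may use a different variant.
>
> /*
>  * CPP cannot paste ".Lrelocated" onto the section name prefix
>  * (that would produce two consecutive dots), so the *_SECT()
>  * form takes the target section suffix explicitly as its
>  * second argument.
>  */
> SYM_FUNC_START_LOCAL_NOALIGN_SECT(.Lrelocated, relocated)
> 	/* ... function body unchanged ... */
> SYM_FUNC_END(.Lrelocated)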
>
> Signed-off-by: Alexander Lobakin <alexandr.lobakin@xxxxxxxxx>
> ---
> arch/x86/Kconfig | 1 +
> arch/x86/boot/Makefile | 1 +
> arch/x86/boot/compressed/Makefile | 1 +
> arch/x86/boot/compressed/head_32.S | 2 +-
> arch/x86/boot/compressed/head_64.S | 32 ++++++++++++-------
> arch/x86/boot/pmjump.S | 2 +-
> arch/x86/crypto/aesni-intel_asm.S | 4 +--
> arch/x86/crypto/poly1305-x86_64-cryptogams.pl | 4 +++
> arch/x86/include/asm/paravirt.h | 2 ++
> arch/x86/include/asm/qspinlock_paravirt.h | 2 ++
> arch/x86/kernel/head_32.S | 4 +--
> arch/x86/kernel/head_64.S | 4 +--
> arch/x86/kernel/kprobes/core.c | 2 ++
> arch/x86/kernel/kvm.c | 2 ++
> arch/x86/kernel/relocate_kernel_32.S | 10 +++---
> arch/x86/kernel/relocate_kernel_64.S | 12 ++++---
> arch/x86/kernel/vmlinux.lds.S | 2 +-
> arch/x86/kvm/emulate.c | 7 +++-
> arch/x86/lib/copy_user_64.S | 2 +-
> arch/x86/lib/error-inject.c | 2 ++
> arch/x86/lib/getuser.S | 5 ++-
> arch/x86/lib/memcpy_64.S | 4 +--
> arch/x86/lib/memmove_64.S | 5 ++-
> arch/x86/lib/memset_64.S | 5 +--
> arch/x86/lib/putuser.S | 2 +-
> arch/x86/power/hibernate_asm_32.S | 10 +++---
> arch/x86/power/hibernate_asm_64.S | 10 +++---
> 27 files changed, 89 insertions(+), 50 deletions(-)

Urgh, how much of that can you avoid by (ab)using __DISABLE_EXPORTS
like:

https://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git/commit/?h=x86/wip.ibt&id=ab74f54f2b1f6cfeaf2b3ba6999bde7cabada9ca