[tip: x86/alternatives] x86/alternative: Consistently patch SMP locks in vmlinux and modules

From: tip-bot2 for Julian Pidancet
Date: Tue Nov 22 2022 - 07:28:33 EST


The following commit has been merged into the x86/alternatives branch of tip:

Commit-ID: 0b1a2171551e05c45ad999e403272ddafa7286c8
Gitweb: https://git.kernel.org/tip/0b1a2171551e05c45ad999e403272ddafa7286c8
Author: Julian Pidancet <julian.pidancet@xxxxxxxxxx>
AuthorDate: Thu, 27 Oct 2022 22:49:06 +02:00
Committer: Borislav Petkov <bp@xxxxxxx>
CommitterDate: Tue, 22 Nov 2022 13:13:55 +01:00

x86/alternative: Consistently patch SMP locks in vmlinux and modules

alternatives_smp_module_add() restricts patching of SMP lock prefixes to
the text address range passed as an argument.
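
For context, that range is recorded per "module" in a structure that
looks roughly like this (a sketch of 'struct smp_alt_module' from
arch/x86/kernel/alternative.c; field names may differ slightly across
kernel versions):

    struct smp_alt_module {
            struct module    *mod;
            char             *name;

            /* ptrs to lock prefixes */
            const s32        *locks;
            const s32        *locks_end;

            /* .text segment, needed to avoid patching init code ;) */
            u8               *text;
            u8               *text_end;

            struct list_head next;
    };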

For vmlinux, patching all the instructions located between the _text and
_etext symbols is allowed. That includes the .text section but also
other sections such as .text.hot and .text.unlikely.
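
For reference, the vmlinux side registers that range in
alternative_instructions() with something like the following
(paraphrased; the exact guard conditions vary between versions):

    /* register the .smp_locks entries and the text range they may patch */
    alternatives_smp_module_add(NULL, "core kernel",
                                __smp_locks, __smp_locks_end,
                                _text, _etext);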

As per the comment inside the 'struct smp_alt_module' definition, the
original purpose of this restriction is to avoid patching the init
code because, when booting with a single CPU, the LOCK prefixes of the
locking primitives are removed.

Later on, when other CPUs are onlined, those LOCK prefixes get added
back in, but by that time the .init code has very likely been freed
already, so patching it would be a bad idea.
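
This is also why the recorded range matters when the prefixes are added
back: the re-locking pass skips every .smp_locks entry that points
outside of [text, text_end). A simplified sketch, modeled on
alternatives_smp_lock() rather than the literal upstream code:

    static void alternatives_smp_lock(const s32 *start, const s32 *end,
                                      u8 *text, u8 *text_end)
    {
            const s32 *poff;

            for (poff = start; poff < end; poff++) {
                    u8 *ptr = (u8 *)poff + *poff;

                    /* ignore entries outside the registered text range */
                    if (!*poff || ptr < text || ptr >= text_end)
                            continue;

                    /* turn the DS segment override prefix back into LOCK */
                    if (*ptr == 0x3e)
                            text_poke(ptr, ((unsigned char []){0xf0}), 1);
            }
    }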

For modules, the current code only allows patching instructions located
inside the .text section, excluding other sections such as .text.hot or
.text.unlikely, which may also need patching.

Make patching of the kernel core and modules more consistent by
allowing all text sections of modules except .init.text to be patched in
module_finalize().

For that, use mod->core_layout.base/mod->core_layout.text_size as the
address range allowed to be patched, which includes all of the module's
code sections except the init code.
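
For reference, the relevant fields (a sketch of 'struct module_layout'
from include/linux/module.h as of this change; the init text lives in a
separate mod->init_layout and is therefore naturally excluded):

    struct module_layout {
            /* The actual code + data. */
            void *base;
            /* Total size. */
            unsigned int size;
            /* The size of the executable code. */
            unsigned int text_size;
            /* Size of RO section of the module (text+rodata) */
            unsigned int ro_size;
            /* ... */
    };

The range passed to alternatives_smp_module_add() thus becomes
[mod->core_layout.base, mod->core_layout.base + text_size), which
covers .text as well as .text.hot, .text.unlikely and friends.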

[ bp: Massage and expand commit message. ]

Signed-off-by: Julian Pidancet <julian.pidancet@xxxxxxxxxx>
Signed-off-by: Borislav Petkov <bp@xxxxxxx>
Acked-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Link: https://lore.kernel.org/r/20221027204906.511277-1-julian.pidancet@xxxxxxxxxx
---
arch/x86/kernel/module.c | 15 +++++++--------
1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index c032edc..b1e6e45 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -251,14 +251,12 @@ int module_finalize(const Elf_Ehdr *hdr,
                     const Elf_Shdr *sechdrs,
                     struct module *me)
 {
-        const Elf_Shdr *s, *text = NULL, *alt = NULL, *locks = NULL,
-                *para = NULL, *orc = NULL, *orc_ip = NULL,
-                *retpolines = NULL, *returns = NULL, *ibt_endbr = NULL;
+        const Elf_Shdr *s, *alt = NULL, *locks = NULL, *para = NULL,
+                *orc = NULL, *orc_ip = NULL, *retpolines = NULL,
+                *returns = NULL, *ibt_endbr = NULL;
         char *secstrings = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;

         for (s = sechdrs; s < sechdrs + hdr->e_shnum; s++) {
-                if (!strcmp(".text", secstrings + s->sh_name))
-                        text = s;
                 if (!strcmp(".altinstructions", secstrings + s->sh_name))
                         alt = s;
                 if (!strcmp(".smp_locks", secstrings + s->sh_name))
@@ -302,12 +300,13 @@ int module_finalize(const Elf_Ehdr *hdr,
                 void *iseg = (void *)ibt_endbr->sh_addr;
                 apply_ibt_endbr(iseg, iseg + ibt_endbr->sh_size);
         }
-        if (locks && text) {
+        if (locks) {
                 void *lseg = (void *)locks->sh_addr;
-                void *tseg = (void *)text->sh_addr;
+                void *text = me->core_layout.base;
+                void *text_end = text + me->core_layout.text_size;
                 alternatives_smp_module_add(me, me->name,
                                             lseg, lseg + locks->sh_size,
-                                            tseg, tseg + text->sh_size);
+                                            text, text_end);
         }

         if (orc && orc_ip)