[tip: x86/cleanups] x86/docs: Remove reference to syscall trampoline in PTI

From: tip-bot2 for Nikolay Borisov
Date: Tue Dec 12 2023 - 09:06:05 EST


The following commit has been merged into the x86/cleanups branch of tip:

Commit-ID: 7a0a6d55ed93fe064039c4e014d5cf3a97391bbb
Gitweb: https://git.kernel.org/tip/7a0a6d55ed93fe064039c4e014d5cf3a97391bbb
Author: Nikolay Borisov <nik.borisov@xxxxxxxx>
AuthorDate: Thu, 02 Nov 2023 15:02:04 +02:00
Committer: Borislav Petkov (AMD) <bp@xxxxxxxxx>
CommitterDate: Tue, 12 Dec 2023 14:43:59 +01:00

x86/docs: Remove reference to syscall trampoline in PTI

Commit

bf904d2762ee ("x86/pti/64: Remove the SYSCALL64 entry trampoline")

removed the syscall trampoline and instead opted to use the default
SYSCALL64 entry point by mapping the percpu TSS into the user page tables.
Unfortunately, the PTI documentation wasn't updated when the respective
changes were made, so bring the doc up to speed.

Signed-off-by: Nikolay Borisov <nik.borisov@xxxxxxxx>
Signed-off-by: Borislav Petkov (AMD) <bp@xxxxxxxxx>
Link: https://lore.kernel.org/r/20231102130204.41043-1-nik.borisov@xxxxxxxx
---
Documentation/arch/x86/pti.rst | 10 ++++------
1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/Documentation/arch/x86/pti.rst b/Documentation/arch/x86/pti.rst
index 4b858a9..e08d351 100644
--- a/Documentation/arch/x86/pti.rst
+++ b/Documentation/arch/x86/pti.rst
@@ -81,11 +81,9 @@ this protection comes at a cost:
and exit (it can be skipped when the kernel is interrupted,
though.) Moves to CR3 are on the order of a hundred
cycles, and are required at every entry and exit.
- b. A "trampoline" must be used for SYSCALL entry. This
- trampoline depends on a smaller set of resources than the
- non-PTI SYSCALL entry code, so requires mapping fewer
- things into the userspace page tables. The downside is
- that stacks must be switched at entry time.
+ b. The percpu TSS is mapped into the user page tables to allow the SYSCALL64
+ entry path to work under PTI. This has no direct runtime cost, but it can
+ be argued that it opens certain timing-attack scenarios.
c. Global pages are disabled for all kernel structures not
mapped into both kernel and userspace page tables. This
feature of the MMU allows different processes to share TLB
@@ -167,7 +165,7 @@ that are worth noting here.
* Failures of the selftests/x86 code. Usually a bug in one of the
more obscure corners of entry_64.S
* Crashes in early boot, especially around CPU bringup. Bugs
- in the trampoline code or mappings cause these.
+ in the mappings cause these.
* Crashes at the first interrupt. Caused by bugs in entry_64.S,
like screwing up a page table switch. Also caused by
incorrectly mapping the IRQ handler entry code.
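
For readers wondering what "mapping the percpu TSS" looks like in practice,
below is a rough sketch paraphrased from the approach bf904d2762ee took in
arch/x86/mm/pti.c. The identifiers (pti_clone_p4d(),
pti_user_pagetable_walk_pte(), per_cpu_ptr_to_phys(), cpu_tss_rw, pfn_pte())
are the kernel's own, but the body is simplified from memory and is not the
verbatim upstream code:

/*
 * Illustrative sketch only -- simplified, not the verbatim upstream code.
 * Idea: clone each CPU's TSS page into the userspace page tables so the
 * stock SYSCALL64 entry path can reach the TSS sp1/sp2 slots (thread stack
 * pointer plus one word of scratch space) before switching to the kernel
 * CR3, making a separate entry trampoline unnecessary.
 */
static void __init pti_clone_user_shared(void)
{
	unsigned int cpu;

	/* Share the cpu_entry_area (entry stacks, entry text, ...). */
	pti_clone_p4d(CPU_ENTRY_AREA_BASE);

	for_each_possible_cpu(cpu) {
		/*
		 * Map each possible CPU's TSS page into the user page
		 * tables during boot so the mapping propagates to all mms.
		 */
		unsigned long va = (unsigned long)&per_cpu(cpu_tss_rw, cpu);
		phys_addr_t pa = per_cpu_ptr_to_phys((void *)va);
		pte_t *target_pte = pti_user_pagetable_walk_pte(va);

		if (WARN_ON(!target_pte))
			return;

		*target_pte = pfn_pte(pa >> PAGE_SHIFT, PAGE_KERNEL);
	}
}

This is also where the trade-off mentioned in the new documentation text comes
from: exposing the TSS through the user page tables costs nothing at runtime,
but it does widen what a side-channel or timing attacker can probe.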