[PATCH bpf 2/3] x86/mm: Disallow vsyscall page read for copy_from_kernel_nofault()

From: Hou Tao
Date: Fri Jan 19 2024 - 02:29:53 EST


From: Hou Tao <houtao1@xxxxxxxxxx>

When trying to use copy_from_kernel_nofault() to read the vsyscall
page from a bpf program, the following oops was reported:

BUG: unable to handle page fault for address: ffffffffff600000
#PF: supervisor read access in kernel mode
#PF: error_code(0x0000) - not-present page
PGD 3231067 P4D 3231067 PUD 3233067 PMD 3235067 PTE 0
Oops: 0000 [#1] PREEMPT SMP PTI
CPU: 1 PID: 20390 Comm: test_progs ...... 6.7.0+ #58
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996) ......
RIP: 0010:copy_from_kernel_nofault+0x6f/0x110
......
Call Trace:
<TASK>
? copy_from_kernel_nofault+0x6f/0x110
bpf_probe_read_kernel+0x1d/0x50
bpf_prog_2061065e56845f08_do_probe_read+0x51/0x8d
trace_call_bpf+0xc5/0x1c0
perf_call_bpf_enter.isra.0+0x69/0xb0
perf_syscall_enter+0x13e/0x200
syscall_trace_enter+0x188/0x1c0
do_syscall_64+0xb5/0xe0
entry_SYSCALL_64_after_hwframe+0x6e/0x76
</TASK>
......
---[ end trace 0000000000000000 ]---

The oops happens as follows: a bpf program uses bpf_probe_read_kernel()
to read from the vsyscall page; bpf_probe_read_kernel() invokes
copy_from_kernel_nofault(), which in turn invokes __get_user_asm(). A
page fault exception is triggered accordingly, but handle_page_fault()
considers the vsyscall page address as a userspace address instead of
a kernel space address, so the fixup set up by bpf isn't applied.
Because the exception happens in kernel space while page fault handling
is disabled, page_fault_oops() is invoked and the oops occurs.
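
A minimal sketch of a bpf program that can trigger this is shown below.
The section name, attach point and buffer size are illustrative only
(not taken from the syzbot reproducer); 0xffffffffff600000 is the fixed
vsyscall page address on x86-64, matching the faulting address in the
oops above:

  /* SPDX-License-Identifier: GPL-2.0 */
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  /* fixed vsyscall page address on x86-64 */
  #define VSYSCALL_ADDR 0xffffffffff600000UL

  char buf[8];

  SEC("tracepoint/syscalls/sys_enter_gettid")
  int do_probe_read(void *ctx)
  {
  	/* ends up in copy_from_kernel_nofault() in the kernel */
  	bpf_probe_read_kernel(buf, sizeof(buf), (void *)VSYSCALL_ADDR);
  	return 0;
  }

  char LICENSE[] SEC("license") = "GPL";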

Fix it by disallowing vsyscall page read for copy_from_kernel_nofault().
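
For reference, is_vsyscall_vaddr() (made visible to maccess.c via the
mm_internal.h include added below) is expected to keep the form of the
existing helper in arch/x86/mm/fault.c, i.e. roughly:

  /* true if the address falls within the fixed vsyscall page */
  static bool is_vsyscall_vaddr(unsigned long vaddr)
  {
  	return unlikely((vaddr & PAGE_MASK) == VSYSCALL_ADDR);
  }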

Originally-from: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Reported-by: syzbot+72aa0161922eba61b50e@xxxxxxxxxxxxxxxxxxxxxxxxx
Closes: https://lore.kernel.org/bpf/CAG48ez06TZft=ATH1qh2c5mpS5BT8UakwNkzi6nvK5_djC-4Nw@xxxxxxxxxxxxxx
Reported-by: xingwei lee <xrivendell7@xxxxxxxxx>
Closes: https://lore.kernel.org/bpf/CABOYnLynjBoFZOf3Z4BhaZkc5hx_kHfsjiW+UWLoB=w33LvScw@xxxxxxxxxxxxxx
Signed-off-by: Hou Tao <houtao1@xxxxxxxxxx>
---
arch/x86/mm/maccess.c | 6 ++++++
1 file changed, 6 insertions(+)

diff --git a/arch/x86/mm/maccess.c b/arch/x86/mm/maccess.c
index 6993f026adec9..bb454e0abbfcf 100644
--- a/arch/x86/mm/maccess.c
+++ b/arch/x86/mm/maccess.c
@@ -3,6 +3,8 @@
#include <linux/uaccess.h>
#include <linux/kernel.h>

+#include "mm_internal.h"
+
#ifdef CONFIG_X86_64
bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size)
{
@@ -15,6 +17,10 @@ bool copy_from_kernel_nofault_allowed(const void *unsafe_src, size_t size)
if (vaddr < TASK_SIZE_MAX + PAGE_SIZE)
return false;

+ /* vsyscall page is also considered as userspace address. */
+ if (is_vsyscall_vaddr(vaddr))
+ return false;
+
/*
* Allow everything during early boot before 'x86_virt_bits'
* is initialized. Needed for instruction decoding in early
--
2.29.2