[PATCH v3] x86: disable non-instrumented version of copy_page when KMSAN is enabled

From: Tetsuo Handa
Date: Fri Mar 01 2024 - 17:41:52 EST


Commit afb2d666d025 ("zsmalloc: use copy_page for full page copy") caused
a false-positive KMSAN warning: the assembly version of copy_page() is not
instrumented, so KMSAN does not see the destination page being written and
still considers it uninitialized. Implement copy_page() using memcpy(),
which KMSAN does instrument, when KMSAN is enabled.

BUG: KMSAN: use-after-free in obj_malloc+0x6cc/0x7b0
Uninit was stored to memory at:
obj_malloc+0x70a/0x7b0
Uninit was created at:
free_unref_page_prepare+0x130/0xfc0

Signed-off-by: Tetsuo Handa <penguin-kernel@xxxxxxxxxxxxxxxxxxx>
---
Changes in v3:
Update the description/comment and use an include file, as suggested
by Thomas Gleixner.
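
For reference (not part of the patch), a minimal sketch of the behaviour
the fallback relies on. KMSAN propagates its shadow ("initializedness")
metadata only through instrumented code: an instrumented memcpy() copies
the metadata along with the data, while the assembly copy_page() moves
only the data and leaves the destination's metadata stale. The function
copy_page_example() below is hypothetical; kmsan_check_memory() is the
checker provided by <linux/kmsan-checks.h>:

#include <linux/kmsan-checks.h>
#include <linux/string.h>

/* Hypothetical illustration, not code proposed by this patch. */
static void copy_page_example(void *to, void *from)
{
	/*
	 * With CONFIG_KMSAN=y the compiler instruments this memcpy(),
	 * copying KMSAN's shadow metadata together with the data, so an
	 * initialized @from leaves @to initialized in KMSAN's view.
	 */
	memcpy(to, from, PAGE_SIZE);

	/*
	 * Passes when @from was fully initialized. After the
	 * uninstrumented assembly copy_page(), @to's metadata would be
	 * unchanged (in the report above: still marked as freed), and
	 * this check could report the false positive fixed here.
	 */
	kmsan_check_memory(to, PAGE_SIZE);
}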

arch/x86/include/asm/page_64.h | 12 ++++++++++++
1 file changed, 12 insertions(+)

diff --git a/arch/x86/include/asm/page_64.h b/arch/x86/include/asm/page_64.h
index cc6b8e087192..98a71727dbac 100644
--- a/arch/x86/include/asm/page_64.h
+++ b/arch/x86/include/asm/page_64.h
@@ -9,6 +9,7 @@
#include <asm/alternative.h>

#include <linux/kmsan-checks.h>
+#include <linux/string.h>

/* duplicated to the one in bootmem.h */
extern unsigned long max_pfn;
@@ -58,7 +59,18 @@ static inline void clear_page(void *page)
: "cc", "memory", "rax", "rcx");
}

+#ifdef CONFIG_KMSAN
+/*
+ * The assembly version of copy_page() is not instrumented and
+ * therefore breaks KMSAN. Fall back to memcpy.
+ */
+static inline void copy_page(void *to, void *from)
+{
+	memcpy(to, from, PAGE_SIZE);
+}
+#else
void copy_page(void *to, void *from);
+#endif

#ifdef CONFIG_X86_5LEVEL
/*
--
2.34.1