[tip:x86/mm] x86/kasan: Clear kasan_zero_page after TLB flush

From: tip-bot for Andrey Ryabinin
Date: Tue Feb 09 2016 - 11:08:17 EST


Commit-ID: 69e0210fd01ff157d332102219aaf5c26ca8069b
Gitweb: http://git.kernel.org/tip/69e0210fd01ff157d332102219aaf5c26ca8069b
Author: Andrey Ryabinin <aryabinin@xxxxxxxxxxxxx>
AuthorDate: Mon, 11 Jan 2016 15:51:18 +0300
Committer: Ingo Molnar <mingo@xxxxxxxxxx>
CommitDate: Tue, 9 Feb 2016 13:33:14 +0100

x86/kasan: Clear kasan_zero_page after TLB flush

Currently we clear kasan_zero_page before __flush_tlb_all(). This
works with the current implementation of native_flush_tlb[_global]()
because it doesn't cause any writes to the KASAN shadow memory.
But any subtle change made in native_flush_tlb*() could break this.
Also, the current code doesn't seem to work for paravirt guests
(lguest).

Only after the TLB flush can we be sure that kasan_zero_page is no
longer used as early shadow (instrumented code will not write to it),
so it should be cleared only after the TLB flush.

Signed-off-by: Andrey Ryabinin <aryabinin@xxxxxxxxxxxxx>
Reviewed-by: Borislav Petkov <bp@xxxxxxx>
Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Andy Lutomirski <luto@xxxxxxxxxxxxxx>
Cc: Andy Lutomirski <luto@xxxxxxxxxx>
Cc: Borislav Petkov <bp@xxxxxxxxx>
Cc: Brian Gerst <brgerst@xxxxxxxxx>
Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Cc: Denys Vlasenko <dvlasenk@xxxxxxxxxx>
Cc: H. Peter Anvin <hpa@xxxxxxxxx>
Cc: Linus Torvalds <torvalds@xxxxxxxxxxxxxxxxxxxx>
Cc: Luis R. Rodriguez <mcgrof@xxxxxxxx>
Cc: Oleg Nesterov <oleg@xxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Toshi Kani <toshi.kani@xxxxxx>
Cc: linux-mm@xxxxxxxxx
Link: http://lkml.kernel.org/r/1452516679-32040-2-git-send-email-aryabinin@xxxxxxxxxxxxx
Signed-off-by: Ingo Molnar <mingo@xxxxxxxxxx>
---
arch/x86/mm/kasan_init_64.c | 11 ++++++++---
1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/kasan_init_64.c b/arch/x86/mm/kasan_init_64.c
index d470cf2..303e470 100644
--- a/arch/x86/mm/kasan_init_64.c
+++ b/arch/x86/mm/kasan_init_64.c
@@ -120,11 +120,16 @@ void __init kasan_init(void)
 	kasan_populate_zero_shadow(kasan_mem_to_shadow((void *)MODULES_END),
 			(void *)KASAN_SHADOW_END);
 
-	memset(kasan_zero_page, 0, PAGE_SIZE);
-
 	load_cr3(init_level4_pgt);
 	__flush_tlb_all();
-	init_task.kasan_depth = 0;
 
+	/*
+	 * kasan_zero_page has been used as early shadow memory, thus it may
+	 * contain some garbage. Now we can clear it, since after the TLB flush
+	 * no one should write to it.
+	 */
+	memset(kasan_zero_page, 0, PAGE_SIZE);
+
+	init_task.kasan_depth = 0;
 	pr_info("KernelAddressSanitizer initialized\n");
 }
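
For reference, the tail of kasan_init() after this change would read
roughly as follows. This is a sketch reconstructed from the hunk above;
the comments on the ordering are added here and are not part of the
file:

	/* Switch to the real init page tables and flush stale TLB entries. */
	load_cr3(init_level4_pgt);
	__flush_tlb_all();

	/*
	 * Only after the flush is kasan_zero_page guaranteed to no longer be
	 * used as early shadow, so it is now safe to clear whatever garbage
	 * it accumulated: instrumented code will not write to it anymore.
	 */
	memset(kasan_zero_page, 0, PAGE_SIZE);

	/* Re-enable KASAN reporting for the init task. */
	init_task.kasan_depth = 0;
	pr_info("KernelAddressSanitizer initialized\n");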