Re: [PATCH 4.4 41/43] x86/mm/pat: Make set_memory_np() L1TF safe

From: Guenter Roeck
Date: Sun Sep 09 2018 - 13:06:34 EST


On 09/09/2018 09:46 AM, Ben Hutchings wrote:
On Tue, 2018-08-14 at 19:18 +0200, Greg Kroah-Hartman wrote:
4.4-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Andi Kleen <ak@xxxxxxxxxxxxxxx>

commit 958f79b9ee55dfaf00c8106ed1c22a2919e0028b upstream
[...]
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -378,12 +378,39 @@ static inline pmd_t pfn_pmd(unsigned lon
 return __pmd(pfn | massage_pgprot(pgprot));
 }
+static inline pud_t pfn_pud(unsigned long page_nr, pgprot_t pgprot)
+{
+ phys_addr_t pfn = page_nr << PAGE_SHIFT;
+ pfn ^= protnone_mask(pgprot_val(pgprot));
+ pfn &= PHYSICAL_PUD_PAGE_MASK;
+ return __pud(pfn | massage_pgprot(pgprot));
+}
[...]

This (and the backport to 4.9) are missing the fix from commit
e14d7dfb41f5 "x86/speculation/l1tf: Fix up pte->pfn conversion for
PAE", as that was applied earlier in the series.ÂÂBut since PAE implies
only 3-level paging I don't know how the PUD functions get used or
whether this actually matters.

Excellent find.

e14d7dfb41f5 (re-)applies cleanly to both 4.4.y and 4.9.y. Since most of its
changes are already applied, the only remaining change is

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index c535012bdb56..5736306bdaab 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -420,7 +420,7 @@ static inline pmd_t pfn_pmd(unsigned long page_nr, pgprot_t pgprot)

static inline pud_t pfn_pud(unsigned long page_nr, pgprot_t pgprot)
{
- phys_addr_t pfn = page_nr << PAGE_SHIFT;
+ phys_addr_t pfn = (phys_addr_t)page_nr << PAGE_SHIFT;
pfn ^= protnone_mask(pgprot_val(pgprot));
pfn &= PHYSICAL_PUD_PAGE_MASK;
return __pud(pfn | massage_pgprot(pgprot));

after cherry-picking it into both branches.
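
For what it's worth, here is a minimal, standalone sketch (not from the
thread) of the truncation that cast prevents. It assumes a 32-bit PAE
build where unsigned long is 32 bits, PAGE_SHIFT is 12, and uint64_t
stands in for phys_addr_t:

	#include <stdint.h>
	#include <stdio.h>

	#define PAGE_SHIFT 12	/* assumed 4 KiB pages */

	int main(void)
	{
		/* PFN of a page at physical address 4 GiB. */
		unsigned long page_nr = 0x100000UL;

		/* Shift performed in unsigned long: wraps to 0 on 32-bit. */
		uint64_t truncated = page_nr << PAGE_SHIFT;

		/* Shift performed in 64 bits, as in the fixed pfn_pud(). */
		uint64_t fixed = (uint64_t)page_nr << PAGE_SHIFT;

		printf("truncated=%#llx fixed=%#llx\n",
		       (unsigned long long)truncated,
		       (unsigned long long)fixed);
		return 0;
	}

On such a build this prints truncated=0 fixed=0x100000000; without the
cast the high physical address bits are lost before they ever reach
PHYSICAL_PUD_PAGE_MASK.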

I think we should re-apply it to both 4.4.y and 4.9.y to be on the safe side.

Thanks,
Guenter