Re: [PATCH] x86/mm/kaiser: Flush the correct ASID in __native_flush_tlb_single()

From: Dave Hansen
Date: Wed Nov 29 2017 - 10:21:30 EST


On 11/29/2017 06:35 AM, Peter Zijlstra wrote:
>> @@ -451,6 +474,9 @@ static inline void __native_flush_tlb_si
>>  	if (kern_asid(loaded_mm_asid) != user_asid(loaded_mm_asid))
>>  		invpcid_flush_one(user_asid(loaded_mm_asid), addr);
>>  	invpcid_flush_one(kern_asid(loaded_mm_asid), addr);
>> +
>> +	/* Check that we are flushing the active ASID: */
>> +	VM_WARN_ON_ONCE(kern_asid(loaded_mm_asid) != cr3_asid());
>>  }
>
> Can't we do this differently (after my recent patches)? It appears to me
> we can unconditionally do INVLPG to shoot down the kernel mapping, and
> then, depending on INVPCID support we can either use that to shoot down
> a single page or simply invalidate the entire user mapping.

Yes, that works. Also, now that I think about it, INVLPG is the safer
(more bug-resistant) instruction to use here. INVPCID takes the ASID as
an explicit operand and _can_ be handed the wrong (kernel) ASID, as we
saw. INVLPG always acts on whatever ASID is currently loaded, so it
can't be wrong about flushing the *current* one.
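
To make that concrete, here is a rough sketch of the two primitives
(not from the patch; the invpcid inline is modeled on the existing
__invpcid() helper, and the type constant name is just illustrative):

/*
 * Sketch: why INVLPG is harder to misuse than INVPCID.  INVLPG
 * implicitly targets whatever PCID is live in CR3, so it cannot
 * name the wrong ASID.  INVPCID takes the PCID as an explicit
 * descriptor field, so a stale or miscomputed PCID silently
 * flushes the wrong context.
 */
static inline void flush_one_invlpg(unsigned long addr)
{
	/* Flushes 'addr' in the *current* PCID only. */
	asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
}

/* INVPCID type 0: invalidate a single (PCID, address) entry. */
#define INVPCID_TYPE_INDIV_ADDR	0UL

static inline void flush_one_invpcid(unsigned long pcid, unsigned long addr)
{
	/* 128-bit descriptor: d[0] = PCID, d[1] = linear address. */
	struct { u64 d[2]; } desc = { { pcid, addr } };

	/* If 'pcid' is wrong, we flush the wrong address space. */
	asm volatile("invpcid %[desc], %[type]"
		     :: [desc] "m" (desc), [type] "r" (INVPCID_TYPE_INDIV_ADDR)
		     : "memory");
}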

I think Andy measured it to be faster than INVPCID too.

So, maybe we should just remove INVPCID's use entirely.
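
If we did, the whole thing would collapse to something like this
(sketch only, assuming flush_user_asid() just marks the user ASID for
a full flush on the next return to userspace):

/*
 * Sketch of an INVPCID-free __native_flush_tlb_single(): INVLPG
 * covers the kernel mapping in the current ASID, and the user
 * ASID is always invalidated lazily instead of flushed directly.
 */
static inline void __native_flush_tlb_single(unsigned long addr)
{
	u32 loaded_mm_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);

	/* Always safe: flushes 'addr' in whatever ASID CR3 holds. */
	asm volatile("invlpg (%0)" ::"r" (addr) : "memory");

	if (!kaiser_enabled)
		return;

	/* No INVPCID: defer the user-ASID flush to the next exit. */
	flush_user_asid(loaded_mm_asid);
}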

> arch/x86/include/asm/tlbflush.h | 23 +++++++----------------
> 1 file changed, 7 insertions(+), 16 deletions(-)
>
> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
> index 481d5094559e..9587722162ee 100644
> --- a/arch/x86/include/asm/tlbflush.h
> +++ b/arch/x86/include/asm/tlbflush.h
> @@ -438,29 +438,20 @@ static inline void __native_flush_tlb_single(unsigned long addr)
>  {
>  	u32 loaded_mm_asid = this_cpu_read(cpu_tlbstate.loaded_mm_asid);
>
> +	asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
> +
> +	if (!kaiser_enabled)
> +		return;
> +
>  	/*
>  	 * Some platforms #GP if we call invpcid(type=1/2) before
>  	 * CR4.PCIDE=1.  Just use flush_user_asid() in case we are
>  	 * called early.
>  	 */
> -	if (!this_cpu_has(X86_FEATURE_INVPCID_SINGLE)) {
> +	if (!this_cpu_has(X86_FEATURE_INVPCID_SINGLE))
>  		flush_user_asid(loaded_mm_asid);
> -		asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
> -		return;
> -	}
> -	/* Flush the address out of both PCIDs. */
> -	/*
> -	 * An optimization here might be to determine addresses
> -	 * that are only kernel-mapped and only flush the kernel
> -	 * ASID.  But, userspace flushes are probably much more
> -	 * important performance-wise.
> -	 *
> -	 * Make sure to do only a single invpcid when KAISER is
> -	 * disabled and we have only a single ASID.
> -	 */
> -	if (kern_asid(loaded_mm_asid) != user_asid(loaded_mm_asid))
> +	else
>  		invpcid_flush_one(user_asid(loaded_mm_asid), addr);
> -	invpcid_flush_one(kern_asid(loaded_mm_asid), addr);
>  }
>
>  static inline void __flush_tlb_all(void)
>
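
FWIW, all of this assumes flush_user_asid() does nothing more than
mark the user ASID stale so the next user CR3 load does a full flush,
roughly like below (hypothetical sketch; the per-cpu mask name is
made up):

/*
 * Assumed behavior of flush_user_asid(), not shown in the patch:
 * record that this CPU's user ASID needs a full flush, and let the
 * return-to-userspace path act on it when it builds the user CR3
 * value.  cpu_tlbstate.user_asid_flush_mask is a hypothetical name.
 */
static inline void flush_user_asid(u32 asid)
{
	if (!kaiser_enabled)
		return;

	/* Remember to flush this ASID on the next user CR3 load. */
	__set_bit(user_asid(asid), (unsigned long *)
		  this_cpu_ptr(&cpu_tlbstate.user_asid_flush_mask));
}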