Re: [PATCH 15/24] x86/mm: Allow flushing for future ASID switches

From: Dave Hansen
Date: Tue Nov 28 2017 - 13:13:40 EST


Thanks for looking at this, Peter. I've been resisting doing this for a
bit and it's an embarrassingly small amount of code.

On 11/28/2017 08:39 AM, Peter Zijlstra wrote:
> @@ -220,7 +221,21 @@ For 32-bit we have the following conventions - kernel is built with
> .macro SWITCH_TO_USER_CR3 scratch_reg:req
> STATIC_JUMP_IF_FALSE .Lend_\@, kaiser_enabled_key, def=1
> mov %cr3, \scratch_reg
> - ADJUST_USER_CR3 \scratch_reg
> + push \scratch_reg

Do we have a good stack at all the spots where we need to do this? It
may have changed with the trampoline stack, but I'm 100% sure that it
wasn't the case in the recent past.

Let me see if I'm reading the assembly right.

Mask off everything but the kernel's ASID from the CR3 value in
\scratch_reg:

> + andq $(0x7FF), \scratch_reg

See if that ASID needs a flush by checking its bit in __asid_flush.
The bt stores the value of the bit in CF:

> + bt \scratch_reg, PER_CPU_VAR(__asid_flush)

Jump past the flush if the CF bit is clear:

> + jnc .Lnoflush_\@

Clear the ASID bit from __asid_flush since we are about to do the flush:

> + btr \scratch_reg, PER_CPU_VAR(__asid_flush)

Restore \scratch_reg back to the full CR3 value (CR3 itself gets
written below):

> + pop \scratch_reg

Jump past the code that sets the no-flush bit (63), forcing a flush:

> + jmp .Ldo_\@
> +
> +.Lnoflush_\@:
> + pop \scratch_reg
> + ALTERNATIVE "", "bts $63, \scratch_reg", X86_FEATURE_PCID
> +
> +.Ldo_\@:
> + orq $(KAISER_SWITCH_MASK), \scratch_reg
> mov \scratch_reg, %cr3
> .Lend_\@:
> .endm
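
FWIW, just to check my reading, the whole macro boils down to
something like this C sketch.  build_user_cr3() and CR3_NOFLUSH are
names I made up for illustration; __asid_flush and KAISER_SWITCH_MASK
are from the patch, and I'm ignoring the push/pop of \scratch_reg:

#define CR3_NOFLUSH	(1UL << 63)

static inline unsigned long build_user_cr3(unsigned long kern_cr3)
{
	unsigned long asid = kern_cr3 & 0x7FF;

	if (test_bit(asid, this_cpu_ptr(&__asid_flush))) {
		/*
		 * Stale ASID: clear the pending bit and leave bit 63
		 * clear so the CR3 write below flushes the user ASID.
		 */
		__clear_bit(asid, this_cpu_ptr(&__asid_flush));
	} else if (cpu_feature_enabled(X86_FEATURE_PCID)) {
		/* Nothing pending: skip the flush on the CR3 write. */
		kern_cr3 |= CR3_NOFLUSH;
	}
	/* Flip over to the user ASID/page tables: */
	return kern_cr3 | KAISER_SWITCH_MASK;
}

The bt+btr pair is effectively a non-atomic test-and-clear, which I
assume is fine here since it is all percpu data on the exit path.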



> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
> index 27eb7e8c5e84..1fb137da4c9f 100644
> --- a/arch/x86/include/asm/tlbflush.h
> +++ b/arch/x86/include/asm/tlbflush.h
> @@ -9,6 +9,7 @@
> #include <asm/cpufeature.h>
> #include <asm/special_insns.h>
> #include <asm/smp.h>
> +#include <asm/kaiser.h>
>
> static inline void __invpcid(unsigned long pcid, unsigned long addr,
> unsigned long type)
> @@ -347,9 +348,33 @@ static inline void cr4_set_bits_and_update_boot(unsigned long mask)
>
> extern void initialize_tlbstate_and_flush(void);
>
> +DECLARE_PER_CPU(unsigned long, __asid_flush);

Could we spare enough space to make this something like
user_asid_flush_pending_mask?

It took me a minute to realize that it was a mask. Also, since we only
have 6 ASIDs, should we be a bit more stingy with the type?
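
Something like this is what I have in mind (the name is just a
suggestion, and since one bit per ASID is enough, even a u8 would hold
all 6):

/* One pending-flush bit per user ASID; 6 ASIDs, so 16 bits is plenty */
DECLARE_PER_CPU(u16, user_asid_flush_pending_mask);

Although the bt/btr in the asm that touch the mask would then need
matching 16-bit operand sizes (btw/btrw), so it may not be worth the
few bytes saved.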

> +/*
> + * Given an asid, flush the corresponding KAISER user ASID.
> + */
> +static inline void flush_user_asid(u16 asid)
> +{
> + /* There is no user ASID if KAISER is off */
> + if (!IS_ENABLED(CONFIG_KAISER))
> + return;
> + /*
> + * We only have a single ASID if PCID is off and the CR3
> + * write will have flushed it.
> + */
> + if (!cpu_feature_enabled(X86_FEATURE_PCID))
> + return;
> +
> + if (!kaiser_enabled)
> + return;
> +
> + __set_bit(kern_asid(asid), this_cpu_ptr(&__asid_flush));
> +}

We call flush_user_asid(), and thus set bits in __asid_flush, in two cases:

1. When we flush the TLB explicitly
2. When we re-use an ASID for a new mm

It took me a minute to realize that mixing these is still OK, even if
the mm associated with the ASID changes. Once the ASID is stale, it
doesn't matter *why* it is stale; all that matters is that the next
guy who *uses* it does the flush. You can do 1,000 TLB flushes, a
context switch, a TLB flush and another context switch, but if you
only go out to userspace once, you only need one ASID flush. That fits
perfectly with this bit that gets set a bunch of times and only
cleared once at exit to userspace.
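
As a sanity check on that reasoning: if the exit path were C, the
bt+btr pair would collapse to a single test-and-clear, something like
this (the helper name is mine; kern_asid() and __asid_flush are from
the patch):

static inline bool user_asid_flush_pending(u16 asid)
{
	/*
	 * The bit may have been set many times (explicit TLB flushes,
	 * ASID reuse) since we last went out to userspace.  One
	 * flushing CR3 write pays for all of them, so a single
	 * test-and-clear is enough.
	 */
	return __test_and_clear_bit(kern_asid(asid),
				    this_cpu_ptr(&__asid_flush));
}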

IOW, this all seems sane, but it took me a few minutes of staring at it
to come to that conclusion.