Re: [PATCH v6 2/4] KVM: X86: Add Paravirt TLB Shootdown

From: Radim Krčmář
Date: Wed Nov 29 2017 - 11:21:29 EST


2017-11-27 20:05-0800, Wanpeng Li:
> From: Wanpeng Li <wanpeng.li@xxxxxxxxxxx>
>
> The remote flushing APIs do a busy wait, which is fine in a bare-metal
> scenario. But within a guest, the vCPUs might have been preempted or
> blocked. In this scenario, the initiator vCPU would end up busy-waiting
> for a long time.
>
> This patch set implements paravirtual TLB flushing, making sure that the
> initiator does not wait for vCPUs that are sleeping; instead, the
> sleeping vCPUs flush the TLB on guest entry.
>
> The best result is achieved when we're overcommitting the host by running
> multiple vCPUs on each pCPU. In this case the PV TLB flush avoids touching
> vCPUs which are not scheduled and avoids the wait on the main CPU.
>
> Tested on a Xeon Gold 6142, 2.6GHz, 2 sockets, 32 cores, 64 threads,
> so 64 pCPUs, with each VM running 64 vCPUs.
>
> ebizzy -M
>      vanilla  optimized  boost
> 1VM    46799      48670     4%
> 2VM    23962      42691    78%
> 3VM    16152      37539   132%
>
> Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> Cc: Radim Krčmář <rkrcmar@xxxxxxxxxx>
> Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Signed-off-by: Wanpeng Li <wanpeng.li@xxxxxxxxxxx>
> ---
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> @@ -498,6 +498,37 @@ static void __init kvm_apf_trap_init(void)
>  	update_intr_gate(X86_TRAP_PF, async_page_fault);
>  }
>
> +static DEFINE_PER_CPU(cpumask_t, __pv_tlb_mask);
> +
> +static void kvm_flush_tlb_others(const struct cpumask *cpumask,
> +				 const struct flush_tlb_info *info)
> +{
> +	u8 state;
> +	int cpu;
> +	struct kvm_steal_time *src;
> +	cpumask_t *flushmask = &per_cpu(__pv_tlb_mask, smp_processor_id());
> +
> +	if (unlikely(!flushmask))
> +		return;

I don't see how flushmask can be NULL here (the per-CPU mask is
statically allocated), and even if it could be, we'd have to fall back
to native_flush_tlb_others() instead of returning, or the flush would
be silently skipped.
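
Something like this (an untested sketch, reusing the names from the
quoted patch) is what I mean by the fallback:

	if (unlikely(!flushmask)) {
		/* No mask to track targets, so do a regular remote flush. */
		native_flush_tlb_others(cpumask, info);
		return;
	}

though with the static per-CPU mask above, the check can simply be
dropped.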

Also, Peter mentioned that we're wasting memory (default is 1k per CPU)
when not running on KVM. Hyper-V hijacks x86_platform.apic_post_init()
to achieve late allocation. smp_ops.smp_prepare_cpus seems slightly
better for our purposes, but I don't really like either.

Couldn't we use arch_initcall(), or early_initcall() if there are
complications with allocating after smp_init()?
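
For illustration, a rough and untested sketch of what I have in mind;
kvm_para_has_feature()/KVM_FEATURE_PV_TLB_FLUSH are assumed from the
rest of this series, and the masks become cpumask_var_t so that nothing
is allocated when not running on KVM:

static DEFINE_PER_CPU(cpumask_var_t, __pv_tlb_mask);

static int __init kvm_alloc_pv_tlb_masks(void)
{
	int cpu;

	/* Don't spend the memory unless PV TLB flush is usable. */
	if (!kvm_para_available() ||
	    !kvm_para_has_feature(KVM_FEATURE_PV_TLB_FLUSH))
		return 0;

	for_each_possible_cpu(cpu) {
		if (!zalloc_cpumask_var(&per_cpu(__pv_tlb_mask, cpu),
					GFP_KERNEL))
			return -ENOMEM;
	}

	return 0;
}
arch_initcall(kvm_alloc_pv_tlb_masks);

kvm_flush_tlb_others() would then have to cope with a missing mask, or
we'd only install it in pv_mmu_ops once the allocation has succeeded.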

Thanks.