Re: [PATCH v4 1/6] KVM: Fix leak vCPU's VMCS value into other pCPU

From: Wanpeng Li
Date: Mon Aug 05 2019 - 20:36:17 EST


On Tue, 6 Aug 2019 at 07:17, Paolo Bonzini <pbonzini@xxxxxxxxxx> wrote:
>
> On 05/08/19 04:03, Wanpeng Li wrote:
> > From: Wanpeng Li <wanpengli@xxxxxxxxxxx>
> >
> > After commit d73eb57b80b (KVM: Boost vCPUs that are delivering interrupts), a
> > five-year-old bug is exposed. Running the ebizzy benchmark in three 80-vCPU VMs
> > on one 80-pCPU Skylake server produces a lot of rcu_sched stall warnings in the
> > VMs after stress testing:
> >
> > INFO: rcu_sched detected stalls on CPUs/tasks: { 4 41 57 62 77} (detected by 15, t=60004 jiffies, g=899, c=898, q=15073)
> > Call Trace:
> > flush_tlb_mm_range+0x68/0x140
> > tlb_flush_mmu.part.75+0x37/0xe0
> > tlb_finish_mmu+0x55/0x60
> > zap_page_range+0x142/0x190
> > SyS_madvise+0x3cd/0x9c0
> > system_call_fastpath+0x1c/0x21
> >
> > swait_active() remains true until finish_swait() is called in
> > kvm_vcpu_block(), so voluntarily preempted vCPUs are still taken into
> > account by the kvm_vcpu_on_spin() loop, which greatly increases the
> > probability that the condition kvm_arch_vcpu_runnable(vcpu) is checked
> > and found true. When APICv is enabled, the yield-candidate vCPU's VMCS
> > RVI field then leaks (via vmx_sync_pir_to_irr()) into the
> > spinning-on-a-taken-lock vCPU's current VMCS.
> >
> > This patch fixes it by conservatively checking only a subset of events.
> >
> > Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> > Cc: Radim Krčmář <rkrcmar@xxxxxxxxxx>
> > Cc: Christian Borntraeger <borntraeger@xxxxxxxxxx>
> > Cc: Marc Zyngier <Marc.Zyngier@xxxxxxx>
> > Cc: stable@xxxxxxxxxxxxxxx
> > Fixes: 98f4a1467 (KVM: add kvm_arch_vcpu_runnable() test to kvm_vcpu_on_spin() loop)
> > Signed-off-by: Wanpeng Li <wanpengli@xxxxxxxxxxx>
> > ---
> > v3 -> v4:
> > * just test KVM_REQ_*
> > * rename the hook to apicv_has_pending_interrupt
> > * wrap with #ifdef CONFIG_KVM_ASYNC_PF
> > v2 -> v3:
> > * check conservatively a subset of events
> > v1 -> v2:
> > * checking swait_active(&vcpu->wq) for involuntary preemption
> >
> >  arch/mips/kvm/mips.c            |  5 +++++
> >  arch/powerpc/kvm/powerpc.c      |  5 +++++
> >  arch/s390/kvm/kvm-s390.c        |  5 +++++
> >  arch/x86/include/asm/kvm_host.h |  1 +
> >  arch/x86/kvm/svm.c              |  6 ++++++
> >  arch/x86/kvm/vmx/vmx.c          |  6 ++++++
> >  arch/x86/kvm/x86.c              | 16 ++++++++++++++++
> >  include/linux/kvm_host.h        |  1 +
> >  virt/kvm/arm/arm.c              |  5 +++++
> >  virt/kvm/kvm_main.c             | 16 +++++++++++++++-
> >  10 files changed, 65 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
> > index 2cfe839..95a4642 100644
> > --- a/arch/mips/kvm/mips.c
> > +++ b/arch/mips/kvm/mips.c
> > @@ -98,6 +98,11 @@ int kvm_arch_vcpu_runnable(struct kvm_vcpu *vcpu)
> >  	return !!(vcpu->arch.pending_exceptions);
> >  }
> >
> > +bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu)
>
> Using a __weak definition for the default implementation is a bit more
> concise. Queued with that change.
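
For readers following along, a minimal sketch of the __weak default Paolo
suggests: instead of per-arch stubs (mips/powerpc/s390/arm), a single
fallback in virt/kvm/kvm_main.c that x86 then overrides. This is only a
sketch under that assumption, not the queued diff:

bool __weak kvm_arch_dy_runnable(struct kvm_vcpu *vcpu)
{
	/* No arch-specific override: fall back to the full runnable check. */
	return kvm_arch_vcpu_runnable(vcpu);
}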
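
As for the "check conservatively a subset of events" part, per the v4
changelog the x86 override tests only a few KVM_REQ_* bits plus the renamed
apicv_has_pending_interrupt hook rather than the full
kvm_arch_vcpu_runnable(). A rough sketch of that idea (hook and field names
are taken from the changelog and diffstat and may differ from the actual
x86.c hunk):

bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu)
{
	/* A vCPU that was kicked while halted is runnable. */
	if (READ_ONCE(vcpu->arch.pv.pv_unhalted))
		return true;

	/* Only cheap, conservative event bits, no vendor-specific state. */
	if (kvm_test_request(KVM_REQ_NMI, vcpu) ||
	    kvm_test_request(KVM_REQ_SMI, vcpu) ||
	    kvm_test_request(KVM_REQ_EVENT, vcpu))
		return true;

	/* With APICv, let vendor code check for a pending posted interrupt. */
	if (vcpu->arch.apicv_active &&
	    kvm_x86_ops->apicv_has_pending_interrupt(vcpu))
		return true;

	return false;
}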

Thank you, Paolo! Btw, how about the other 5 patches?

Regards,
Wanpeng Li