Re: [PATCH] KVM: X86: Boost queue head vCPU to mitigate lock waiter preemption

From: Paolo Bonzini
Date: Wed Jul 24 2019 - 08:17:20 EST


On 24/07/19 11:43, Wanpeng Li wrote:
> From: Wanpeng Li <wanpengli@xxxxxxxxxxx>
>
> Commit 11752adb ("locking/pvqspinlock: Implement hybrid PV queued/unfair
> locks") introduced hybrid PV queued/unfair locks:
> - queued mode (no starvation)
> - unfair mode (good performance on lightly contended locks)
> Lock waiters fall into the unfair mode especially in VMs with over-committed
> vCPUs, since increasing over-commitment increases the likelihood that the
> queue head vCPU has been preempted and is not actively spinning.
>
> However, rescheduling the queue head vCPU in a timely manner so that it can
> acquire the lock still performs better than depending on lock stealing alone
> in over-subscribed scenarios.
>
> Testing on an 80-HT 2-socket Xeon Skylake server, with 80-vCPU VMs and 80GB RAM:
> ebizzy -M
>        vanilla    boosting    improved
> 1VM      23520       25040          6%
> 2VM       8000       13600         70%
> 3VM       3100        5400         74%
>
> On unlock, the lock holder vCPU yields to the queue head vCPU, boosting a
> queue head that was either involuntarily preempted or voluntarily halted
> after failing to acquire the lock within a short spin in the guest.

Clever! I have applied the patch.

Paolo

> Cc: Waiman Long <longman@xxxxxxxxxx>
> Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
> Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> Cc: Radim Krčmář <rkrcmar@xxxxxxxxxx>
> Signed-off-by: Wanpeng Li <wanpengli@xxxxxxxxxxx>
> ---
> arch/x86/kvm/x86.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 01e18ca..c6d951c 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -7206,7 +7206,7 @@ static void kvm_sched_yield(struct kvm *kvm, unsigned long dest_id)
>
> rcu_read_unlock();
>
> - if (target)
> + if (target && READ_ONCE(target->ready))
> kvm_vcpu_yield_to(target);
> }
>
> @@ -7246,6 +7246,7 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
> break;
> case KVM_HC_KICK_CPU:
> kvm_pv_kick_cpu_op(vcpu->kvm, a0, a1);
> + kvm_sched_yield(vcpu->kvm, a1);
> ret = 0;
> break;
> #ifdef CONFIG_X86_64
>