Re: [PATCH 2/4] KVM: x86: Rely solely on preempted_in_kernel flag for directed yield

From: Sean Christopherson
Date: Wed Jan 10 2024 - 12:13:42 EST


On Wed, Jan 10, 2024, Yuan Yao wrote:
> On Tue, Jan 09, 2024 at 04:39:36PM -0800, Sean Christopherson wrote:
> > @@ -13093,7 +13092,7 @@ bool kvm_arch_dy_has_pending_interrupt(struct kvm_vcpu *vcpu)
> >
> > bool kvm_arch_vcpu_preempted_in_kernel(struct kvm_vcpu *vcpu)
> > {
> > - return kvm_arch_vcpu_in_kernel(vcpu);
> > + return vcpu->arch.preempted_in_kernel;
> > }
> >
> > bool kvm_arch_dy_runnable(struct kvm_vcpu *vcpu)
> > @@ -13116,9 +13115,6 @@ bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
> > if (vcpu->arch.guest_state_protected)
> > return true;
> >
> > - if (vcpu != kvm_get_running_vcpu())
> > - return vcpu->arch.preempted_in_kernel;
> > -
>
> Now this function accepts a vcpu parameter, but for VMX it can only get the
> information from the "current" vcpu loaded on hardware. I'm not sure whether
> we need a "WARN_ON(vcpu != kvm_get_running_vcpu())" here to guard it, e.g.
> kvm_guest_state() still uses this function (although it did the check before).

Eh, I don't think it's worth adding a one-off kvm_get_running_vcpu() sanity check.
In the vast majority of cases, if VMREAD or VMWRITE is used improperly, the
instruction will fail at some point due to the pCPU not having any VMCS loaded.
It's really just cross-vCPU checks that could silently do the wrong thing, and
those flows are so few and far between that I'm comfortable taking a "just get
it right" stance.
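
For the record, the one-off check in question would look something like the
below (completely untested, tail of the function quoted from memory):

	bool kvm_arch_vcpu_in_kernel(struct kvm_vcpu *vcpu)
	{
		if (vcpu->arch.guest_state_protected)
			return true;

		/*
		 * The CPL lookup below goes through vendor code and, on VMX,
		 * VMREADs the currently loaded VMCS, i.e. is only valid for
		 * the vCPU that is loaded on this pCPU.
		 */
		WARN_ON_ONCE(vcpu != kvm_get_running_vcpu());

		return static_call(kvm_x86_get_cpl)(vcpu) == 0;
	}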

If we want to add sanity checks, I think my vote would be to plumb @vcpu down
into vmcs_read{16,32,64,l} and add sanity checks there, probably with some sort
of guard so that the sanity checks can be enabled only for debug kernels.
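
E.g. very roughly, with a made-up Kconfig name and the bodies of the accessors
elided (every caller would obviously need to be converted to pass the vCPU):

	static __always_inline void vmcs_assert_vcpu_loaded(struct kvm_vcpu *vcpu)
	{
		if (IS_ENABLED(CONFIG_KVM_PROVE_VMCS_ACCESS))
			WARN_ON_ONCE(vcpu != kvm_get_running_vcpu());
	}

	static __always_inline u16 vmcs_read16(struct kvm_vcpu *vcpu, unsigned long field)
	{
		vmcs_assert_vcpu_loaded(vcpu);
		vmcs_check16(field);
		...
	}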