Re: [PATCH v4 3/4] KVM: x86/mmu: Move slot checks from __kvm_faultin_pfn() to kvm_faultin_pfn()

From: Sean Christopherson
Date: Tue Feb 20 2024 - 21:10:38 EST


On Tue, Feb 20, 2024, Yan Zhao wrote:
> On Mon, Feb 19, 2024 at 11:44:54AM -0800, Sean Christopherson wrote:
> > If KVM is using TDP, but L1 is using shadow paging for L2, then routing through
> > kvm_handle_noslot_fault() will incorrectly cache the gfn as MMIO, and create an
> > MMIO SPTE. Creating an MMIO SPTE is ok, but only because kvm_mmu_page_role.guest_mode
> > ensures KVM uses different roots for L1 vs. L2. But mmio_gfn will remain valid,
> > and could (quite theoretically) cause KVM to incorrectly treat an L1 access to
> > the private TSS or identity mapped page tables as MMIO.
> Why would KVM treat an L1 access to the private TSS and identity mapped page
> tables as MMIO even when mmio_gfn is valid?

Because KVM doesn't need to take an EPT Violation or Misconfig to trigger emulation.
Those just happen to be (by far) the most common ways KVM gets into the emulator
on modern CPUs.
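
To make that concrete: the problem isn't the SPTEs, it's the per-vCPU MMIO
cache.  Hand-waving away the generation check, and with a made-up helper name,
the gist is (sketch only, not verbatim kernel code):

	/*
	 * Illustration only: any trip through the emulator consults the
	 * per-vCPU MMIO cache when translating the access, regardless of
	 * what triggered emulation in the first place.
	 */
	static bool is_cached_mmio_access(struct kvm_vcpu *vcpu, gpa_t gpa)
	{
		/*
		 * If mmio_gfn was populated while running L2 (L1 using
		 * shadow paging for L2, on top of TDP for L1), a later
		 * emulated L1 access to the same gfn, e.g. the private TSS,
		 * matches here and gets handled as MMIO even though L1's
		 * EPT entries for that gfn are perfectly normal.
		 */
		return vcpu->arch.mmio_gfn == gpa_to_gfn(gpa);
	}

I.e. no EPT Misconfig is needed; it's enough that KVM ends up emulating for
_any_ reason while the stale cache entry is live.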

> It looks like (on Intel platforms) the EPT for L1 will only install normal
> SPTEs (non-MMIO SPTEs) for the two private slots, so there would be no EPT
> misconfiguration and KVM would not go down the emulation path incorrectly.
> Am I missing something?

..

> > --
> > Subject: [PATCH] KVM: x86/mmu: Don't force emulation of L2 accesses to
> > non-APIC internal slots
> >
> > Signed-off-by: Sean Christopherson <seanjc@xxxxxxxxxx>
> > ---
> > arch/x86/kvm/mmu/mmu.c | 17 +++++++++++++----
> > 1 file changed, 13 insertions(+), 4 deletions(-)
> >
> > diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> > index 488f522f09c6..4ce824cec5b9 100644
> > --- a/arch/x86/kvm/mmu/mmu.c
> > +++ b/arch/x86/kvm/mmu/mmu.c
> > @@ -4341,8 +4341,18 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
> >  	if (slot && (slot->flags & KVM_MEMSLOT_INVALID))
> >  		return RET_PF_RETRY;
> >
> > -	if (!kvm_is_visible_memslot(slot)) {
> > -		/* Don't expose private memslots to L2. */
> > +	if (slot && slot->id == APIC_ACCESS_PAGE_PRIVATE_MEMSLOT) {
> > +		/*
> > +		 * Don't map L1's APIC access page into L2, KVM doesn't support
> > +		 * using APICv/AVIC to accelerate L2 accesses to L1's APIC,
> > +		 * i.e. the access needs to be emulated. Emulating access to
> > +		 * L1's APIC is also correct if L1 is accelerating L2's own
> > +		 * virtual APIC, but for some reason L1 also maps _L1's_ APIC
> > +		 * into L2. Note, vcpu_is_mmio_gpa() always treats access to
> > +		 * the APIC as MMIO. Allow an MMIO SPTE to be created, as KVM
> > +		 * uses different roots for L1 vs. L2, i.e. there is no danger
> > +		 * of breaking APICv/AVIC for L1.
> > +		 */
> >  		if (is_guest_mode(vcpu)) {
> >  			fault->slot = NULL;
> >  			fault->pfn = KVM_PFN_NOSLOT;
> Checking fault->is_private before calling kvm_handle_noslot_fault()?

Ya, the actual series will perform that check; this patch slots in halfway through.
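
FWIW, the guard will look something like this (sketch only, the exact form
and placement in the actual series may differ):

	/*
	 * Private faults must never reach kvm_handle_noslot_fault(), as
	 * emulating an access to private memory is nonsensical.  Punt to
	 * userspace with a memory fault exit instead.
	 */
	if (fault->is_private) {
		kvm_mmu_prepare_memory_fault_exit(vcpu, fault);
		return -EFAULT;
	}

	return kvm_handle_noslot_fault(vcpu, fault, access);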

> And do we need a centralized check of fault->is_private in kvm_mmu_do_page_fault()
> before returning RET_PF_EMULATE?

Oof, yes.

> > @@ -4355,8 +4365,7 @@ static int __kvm_faultin_pfn(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
> >  	 * MMIO SPTE. That way the cache doesn't need to be purged
> >  	 * when the AVIC is re-enabled.
> >  	 */
> > -	if (slot && slot->id == APIC_ACCESS_PAGE_PRIVATE_MEMSLOT &&
> > -	    !kvm_apicv_activated(vcpu->kvm))
> > +	if (!kvm_apicv_activated(vcpu->kvm))
> >  		return RET_PF_EMULATE;
> Otherwise, here also needs a checking of fault->is_private?
> Maybe also for where RET_PF_EMULATE is returned when page_fault_handle_page_track()
> is true (though I know it's always false for TDX).

Ya, and practically speaking it should always be false for functional setups
(software-protected VMs don't yet play nice with shadow paging or any form of
emulation), but it's easy enough to guard against RET_PF_EMULATE in
kvm_mmu_do_page_fault().
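
I.e. something along these lines in kvm_mmu_do_page_fault() (again just a
sketch, exact message/placement TBD):

	/*
	 * Squash RET_PF_EMULATE for private faults no matter which path
	 * produced it, and kick the fault out to userspace instead.
	 */
	if (r == RET_PF_EMULATE && fault.is_private) {
		pr_warn_ratelimited("kvm: emulated access to private memory\n");
		kvm_mmu_prepare_memory_fault_exit(vcpu, &fault);
		r = -EFAULT;
	}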

I'm going to post _just_ patch 1 as v5 so that it can land in 6.8 (assuming I
don't screw it up again).

I'll post a separate series to tackle the refactor and is_private cleanups and
fixes, as that has ballooned to 17 patches :-/