Re: [PATCH v18 040/121] KVM: x86/mmu: Disallow fast page fault on private GPA

From: Paolo Bonzini
Date: Mon Feb 12 2024 - 12:03:06 EST


On Tue, Jan 23, 2024 at 12:55 AM <isaku.yamahata@xxxxxxxxx> wrote:
>
> From: Isaku Yamahata <isaku.yamahata@xxxxxxxxx>
>
> TDX requires SEAMCALLs to operate on the Secure EPT instead of direct
> memory access, and a SEAMCALL is a heavyweight operation. A fast page
> fault on a private GPA therefore makes no sense; disallow it.
>
> Signed-off-by: Isaku Yamahata <isaku.yamahata@xxxxxxxxx>
> Reviewed-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
> ---
> arch/x86/kvm/mmu/mmu.c | 12 ++++++++++--
> 1 file changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index b2924bd9b668..54d4c8f1ba68 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -3339,8 +3339,16 @@ static int kvm_handle_noslot_fault(struct kvm_vcpu *vcpu,
> return RET_PF_CONTINUE;
> }
>
> -static bool page_fault_can_be_fast(struct kvm_page_fault *fault)
> +static bool page_fault_can_be_fast(struct kvm *kvm, struct kvm_page_fault *fault)
> {
> + /*
> + * TDX private mapping doesn't support fast page fault because the EPT
> + * entry is read/written with TDX SEAMCALLs instead of direct memory
> + * access.
> + */
> + if (kvm_is_private_gpa(kvm, fault->addr))
> + return false;

I think this does not apply to SNP, does it? If so, it would be better
to check the SPTE against the shared-page mask inside the do...while
loop instead of rejecting all private GPAs up front.

Paolo

> /*
> * Page faults with reserved bits set, i.e. faults on MMIO SPTEs, only
> * reach the common page fault handler if the SPTE has an invalid MMIO
> @@ -3450,7 +3458,7 @@ static int fast_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> u64 *sptep;
> uint retry_count = 0;
>
> - if (!page_fault_can_be_fast(fault))
> + if (!page_fault_can_be_fast(vcpu->kvm, fault))
> return ret;
>
> walk_shadow_page_lockless_begin(vcpu);
> --
> 2.25.1
>