Re: [RFC PATCH v2 4/6] KVM: x86: Introduce fault type to indicate kvm page fault is private

From: Michael Roth
Date: Sun Jun 25 2023 - 21:10:25 EST


On Fri, Jun 23, 2023 at 01:04:03PM -0700, Sean Christopherson wrote:
> On Thu, Jun 22, 2023, isaku.yamahata@xxxxxxxxx wrote:
> > diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> > index 7f9ec1e5b136..0ec0b927a391 100644
> > --- a/arch/x86/kvm/mmu/mmu_internal.h
> > +++ b/arch/x86/kvm/mmu/mmu_internal.h
> > @@ -188,6 +188,13 @@ static inline bool is_nx_huge_page_enabled(struct kvm *kvm)
> > 	return READ_ONCE(nx_huge_pages) && !kvm->arch.disable_nx_huge_pages;
> > }
> >
> > +enum kvm_fault_type {
> > +	KVM_FAULT_MEM_ATTR,
> > +	KVM_FAULT_SHARED,
> > +	KVM_FAULT_SHARED_ALWAYS,
> > +	KVM_FAULT_PRIVATE,
>
> This is silly. Just use AMD's error code bit, i.e. PFERR_GUEST_ENC_MASK as per
> the SNP series.
>
> Bit 34 (ENC): Set to 1 if the guest’s effective C-bit was 1, 0 otherwise.
>
> Just because Intel's ucode is too crusty to support error codes larger than 16
> bits doesn't mean KVM can't utilize the bits :-) KVM already translates to AMD's
> error codes for other things, e.g.
>
> 	error_code |= (exit_qualification & EPT_VIOLATION_GVA_TRANSLATED) != 0 ?
> 		      PFERR_GUEST_FINAL_MASK : PFERR_GUEST_PAGE_MASK;
>
> For TDX, handle_ept_violation() can do something like:
>
> 	if (is_tdx(vcpu->kvm))
> 		error_code |= (gpa & shared) ? 0 : PFERR_GUEST_ENC_MASK;
> 	else if (kvm_mem_is_private(vcpu->kvm, gpa_to_gfn(gpa)))
> 		error_code |= PFERR_GUEST_ENC_MASK;

Maybe this is what you're getting at below, but it seems awkward to
have this handled in TDX code, since that would suggest the SVM module
would also need to duplicate that logic and synthesize the
PFERR_GUEST_ENC_MASK bit for non-SNP VMs (e.g. gmem self-tests).

So maybe SNP/TDX can rely on passing this via error_code, and then
some common code, like kvm_mem_is_private(), can handle this for
non-TDX/SNP guest types. But the current gmem series does this via a
new .is_private field in struct kvm_page_fault:

.is_private = kvm_mem_is_private(vcpu->kvm, cr2_or_gpa >> PAGE_SHIFT),

This seems at odds with the idea of storing this 'fault->is_private'
logic into the error_code. Isaku and I were discussing[1] that we
should do one or the other:

a) store everything in error_code
b) use the .is_private field that seems to have been introduced to
track exactly this information

So I think this series is attempting to do b). If you're suggesting we
should do a), then what purpose is the fault->is_private field meant
to serve? Are you planning to get rid of it? Otherwise it seems
redundant.

But I think approach b) is useful for another reason:

>
> And that's not even taking into account that TDX might have a separate entry point,
> i.e. the "is_tdx()" check can probably be avoided.
>
> As for optimizing kvm_mem_is_private() to avoid unnecessary xarray lookups, that
> can and should be done separately, e.g.
>
> static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
> {
> 	return IS_ENABLED(CONFIG_KVM_PRIVATE_MEM) &&
> 	       kvm_guest_has_private_mem(kvm) &&
> 	       kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE;
> }
>
> where x86's implementation of kvm_guest_has_private_mem() can be
>
> #define kvm_guest_has_private_mem(kvm) (!!(kvm)->vm_type)

It's just about not checking the xarray in the non-protected case. The
optimization here is that neither TDX nor SNP cares about the xarray
as far as determining whether the *fault* was private or not. We only
care later, in the part of the KVM MMU code that determines whether
the fault type is at odds with the xarray state and whether to
generate a KVM_EXIT_MEMORY_FAULT as a result.

In that code, both TDX/SNP, as well as gmem self-tests, will all end up
calling kvm_mem_is_private().

In the gmem self-test case, in the current gmem base series, and I
think with what you've proposed here, we'd check the xarray via
kvm_mem_is_private(), both in kvm_mmu_do_page_fault() and later in
kvm_faultin_pfn() where the KVM_EXIT_MEMORY_FAULT case is handled.
That seems off because:

1) Checking kvm_mem_is_private() in kvm_mmu_do_page_fault() means
   that we grab the value prior to when the KVM MMU records
   mmu_invalidate_seq, which means there's a window between
   kvm_mmu_do_page_fault() and kvm_faultin_pfn() where an updated
   xarray won't be noticed, and the #NPF retry logic will not get
   triggered.

2) For the gmem self-test, kvm_mem_is_private() is the authority on
   whether the fault is private or not. There's no need to distinguish
   between what was set via KVM_SET_MEMORY_ATTRIBUTES and what was
   indicated via fault flags/GPA as TDX/SNP do.

So it makes sense, in the gmem case (and TDX/SNP), to defer the
kvm_mem_is_private() check until later in kvm_faultin_pfn(). It avoids
a duplicate lookup, and a race. But .is_private only conveys an
encrypted/unencrypted fault in the TDX/SNP case; it doesn't have a way
to cleanly convey this case: "just use whatever kvm_mem_is_private()
reports later, because it will always be what the VMM set, and it's
too early to check kvm_mem_is_private() right now".

So that's where this enum type came from. Although in the discussion I
linked to above I suggested just:

enum kvm_fault_type {
	KVM_FAULT_VMM_DEFINED,
	KVM_FAULT_SHARED,
	KVM_FAULT_PRIVATE,

Which I think would work just as well, since VMM_DEFINED is basically what
MEM_ATTR case handles in this patch, and if you also add the
kvm_guest_has_private_mem() check into kvm_mem_is_private() as you suggested,
it would naturally cover the SHARED_ALWAYS case as well.

I'm fine with other approaches, but think there are a couple good reasons
(#1 and #2 above) for what Isaku is proposing, and any alternative
approach should account for them as well.

[1] https://lore.kernel.org/kvm/20230612042559.375660-1-michael.roth@xxxxxxx/T/#mdb24fe4998aa3ff3e568e8a0cba4d12b159087c7

-Mike