Re: [PATCH RFC v1 0/9] KVM: SVM: Defer page pinning for SEV guests

From: Marc Orr
Date: Fri Apr 01 2022 - 13:29:16 EST


On Thu, Mar 31, 2022 at 12:01 PM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>
> On Thu, Mar 31, 2022, Peter Gonda wrote:
> > On Wed, Mar 30, 2022 at 10:48 PM Nikunj A. Dadhania <nikunj@xxxxxxx> wrote:
> > > On 3/31/2022 1:17 AM, Sean Christopherson wrote:
> > > > On Wed, Mar 30, 2022, Nikunj A. Dadhania wrote:
> > > >> On 3/29/2022 2:30 AM, Sean Christopherson wrote:
> > > >>> Let me preface this by saying I generally like the idea and especially the
> > > >>> performance, but...
> > > >>>
> > > >>> I think we should abandon this approach in favor of committing all our resources
> > > >>> to fd-based private memory[*], which (if done right) will provide on-demand pinning
> > > >>> for "free".
> > > >>
> > > >> I will give this a try for SEV, was on my todo list.
> > > >>
> > > >>> I would much rather get that support merged sooner than later, and use
> > > >>> it as a carrot for legacy SEV to get users to move over to its new APIs, with a long
> > > >>> term goal of deprecating and disallowing SEV/SEV-ES guests without fd-based private
> > > >>> memory.
> > > >>
> > > >>> That would require guest kernel support to communicate private vs. shared,
> > > >>
> > > >> Could you explain this in more detail? Is this required for punching holes for shared pages?
> > > >
> > > > Unlike SEV-SNP, which enumerates private vs. shared in the error code, SEV and SEV-ES
> > > > don't provide private vs. shared information to the host (KVM) on page fault. And
> > > > it's even more fundamental than that, as SEV/SEV-ES won't even fault if the guest
> > > > accesses the "wrong" GPA variant; they'll silently consume/corrupt data.
> > > >
> > > > That means KVM can't support implicit conversions for SEV/SEV-ES, and so an explicit
> > > > hypercall is mandatory. SEV doesn't even have a vendor-agnostic guest/host paravirt
> > > > ABI, and IIRC SEV-ES doesn't provide a conversion/map hypercall in the GHCB spec, so
> > > > running a SEV/SEV-ES guest under UPM would require the guest firmware+kernel to be
> > > > properly enlightened beyond what is required architecturally.
> > > >
> > >
> > > So with the guest supporting KVM_FEATURE_HC_MAP_GPA_RANGE and the host (KVM) supporting
> > > the KVM_HC_MAP_GPA_RANGE hypercall, a SEV/SEV-ES guest can communicate private/shared
> > > pages to the hypervisor, and this information can be used to mark pages shared/private.
> >
> > One concern here may be that the VMM doesn't know which guests have
> > KVM_FEATURE_HC_MAP_GPA_RANGE support and which don't. Only once the
> > guest boots does the guest tell KVM that it supports
> > KVM_FEATURE_HC_MAP_GPA_RANGE. If the guest doesn't, we need to pin all
> > the memory before we run the guest to be safe.
>
> Yep, that's a big reason why I view purging the existing SEV memory management as
> a long term goal. The other being that userspace obviously needs to be updated to
> support UPM[*]. I suspect the only feasible way to enable this for SEV/SEV-ES
> would be to restrict it to new VM types that have a disclaimer regarding additional
> requirements.
>
> [*] I believe Peter coined the UPM acronym for "Unmapping guest Private Memory". We've
> been using it internally for discussion, and it rolls off the tongue a lot easier than
> the full phrase, and is much more precise/descriptive than just "private fd".

Can we really "purge the existing SEV memory management"? This seems
like a non-starter because it violates the userspace API (i.e., the
ability for the userspace VMM to run a guest without
KVM_FEATURE_HC_MAP_GPA_RANGE). Or maybe I'm not quite following what
you mean by purge.
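
For context, on plain SEV the guest-side conversion notification under
discussion is just the KVM_HC_MAP_GPA_RANGE hypercall. Roughly (a
minimal sketch: the helper name is hypothetical, the constants and
kvm_hypercall3() are the upstream ABI):

#include <linux/types.h>
#include <linux/kvm_para.h>	/* KVM_HC_MAP_GPA_RANGE */
#include <asm/kvm_para.h>	/* kvm_hypercall3(), KVM_MAP_GPA_RANGE_* */

/*
 * Hypothetical guest helper: tell the host that npages 4K pages starting
 * at gpa flipped between encrypted/private and decrypted/shared.  Bit 4
 * of the attributes carries the encryption status, bits 3:0 the page size.
 */
static long notify_enc_status_change(unsigned long gpa, unsigned long npages,
				     bool enc)
{
	unsigned long attrs = KVM_MAP_GPA_RANGE_PAGE_SZ_4K;

	attrs |= enc ? KVM_MAP_GPA_RANGE_ENCRYPTED : KVM_MAP_GPA_RANGE_DECRYPTED;
	return kvm_hypercall3(KVM_HC_MAP_GPA_RANGE, gpa, npages, attrs);
}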

Assuming that UPM-based lazy pinning comes together via a new VM type
that only supports new images based on a minimum kernel version with
KVM_FEATURE_HC_MAP_GPA_RANGE, then I think this would look as follows:

1. Userspace VMM: Check the SEV VM type. If the type is a legacy SEV
type, then do upfront pinning. Else, skip upfront pinning (see the
sketch after this list).
2. KVM: I'm not sure anything special needs to happen here. For the
legacy VM types, it can be configured to use legacy memslots,
presumably the same way as non-CVMs are configured. For the new VM
type, it should be configured to use UPM.
3. Control plane (the thing creating VMs): Responsible for not
allowing legacy SEV images (i.e., images without
KVM_FEATURE_HC_MAP_GPA_RANGE) to be used with the new SEV VM types
that use UPM and support demand pinning.
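
To make (1) concrete, here's a rough sketch of the userspace side (the
VM-type constant and the VM/pinning helpers are made-up names for
illustration; KVM_CAP_EXIT_HYPERCALL and KVM_HC_MAP_GPA_RANGE are
existing ABI):

#include <sys/ioctl.h>
#include <linux/kvm.h>		/* KVM_ENABLE_CAP, KVM_CAP_EXIT_HYPERCALL */
#include <linux/kvm_para.h>	/* KVM_HC_MAP_GPA_RANGE */

static void setup_pinning_policy(struct sev_vm *vm)
{
	if (vm->type == SEV_VM_TYPE_LEGACY) {
		/* Guest may lack KVM_FEATURE_HC_MAP_GPA_RANGE: pin all
		 * guest memory before the first KVM_RUN to be safe. */
		pin_all_guest_memory(vm);
	} else {
		/* New UPM VM type: take KVM_HC_MAP_GPA_RANGE exits and
		 * let explicit conversions drive pinning on demand. */
		struct kvm_enable_cap cap = {
			.cap  = KVM_CAP_EXIT_HYPERCALL,
			.args = { 1ULL << KVM_HC_MAP_GPA_RANGE },
		};

		ioctl(vm->fd, KVM_ENABLE_CAP, &cap);
	}
}

With the exit enabled, each guest conversion surfaces as a
KVM_EXIT_HYPERCALL with nr == KVM_HC_MAP_GPA_RANGE, which is where the
allocate/punch-hole decision for the fd-based private memory would live.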

Sean: Did I get this right?