Re: [RFC V3 PATCH 5/6] selftests: kvm: x86: Execute VMs with private memory

From: Vishal Annapurve
Date: Fri Oct 14 2022 - 05:35:47 EST


On Fri, Oct 7, 2022 at 1:47 AM Sean Christopherson <seanjc@xxxxxxxxxx> wrote:
>
> On Fri, Aug 19, 2022, Vishal Annapurve wrote:
> > +/*
> > + * Execute KVM hypercall to change memory access type for a given gpa range.
> > + *
> > + * Input Args:
> > + * type - memory conversion type TO_SHARED/TO_PRIVATE
> > + * gpa - starting gpa address
> > + * size - size of the range starting from gpa for which memory access needs
> > + * to be changed
> > + *
> > + * Output Args: None
> > + *
> > + * Return: None
> > + *
> > + * Function called by guest logic in selftests to update the memory access type
> > + * for a given gpa range. This API is useful in exercising implicit conversion
> > + * path.
> > + */
> > +void guest_update_mem_access(enum mem_conversion_type type, uint64_t gpa,
> > + uint64_t size)
>
> Provide wrappers to self-document what's going on, then the massive block comments
> go away. And the guts of this and guest_update_mem_map() are nearly identical.
>
> Hmm, and we probably want to make it possible to do negative testing.
>
> Then the one-off enums for TO_PRIVATE and whatnot go away too.
>
> > +{
> > + int ret = kvm_hypercall(KVM_HC_MAP_GPA_RANGE, gpa, size >> MIN_PAGE_SHIFT,
>
> Needs an assert that @size is page aligned. And since these are x86-64 specific,
> just use PAGE_SHIFT. Huh, IS_ALIGNED() doesn't exist in selftests. That should
> be added, either by pulling in align.h or by adding the generic macros to
> kvm_util_base.h.
>
> And then x86-64's processor.h can define IS_PAGE_ALIGNED().
>
> E.g.
>
> static inline int __kvm_hypercall_map_gpa_range(uint64_t gpa, uint64_t size,
>                                                 uint64_t flags)
> {
>         return kvm_hypercall(KVM_HC_MAP_GPA_RANGE, gpa, size >> PAGE_SHIFT, flags, 0);
> }
>
> static inline void kvm_hypercall_map_gpa_range(uint64_t gpa, uint64_t size,
>                                                uint64_t flags)
> {
>         int ret;
>
>         GUEST_ASSERT_2(IS_PAGE_ALIGNED(gpa) && IS_PAGE_ALIGNED(size), gpa, size);
>
>         ret = __kvm_hypercall_map_gpa_range(gpa, size, flags);
>         GUEST_ASSERT_1(!ret, ret);
> }
>
> static inline void kvm_hypercall_map_shared(uint64_t gpa, uint64_t size)
> {
>         kvm_hypercall_map_gpa_range(gpa, size, KVM_CLR_GPA_RANGE_ENC_ACCESS);
> }
>
> static inline void kvm_hypercall_map_private(uint64_t gpa, uint64_t size)
> {
>         kvm_hypercall_map_gpa_range(gpa, size, KVM_MARK_GPA_RANGE_ENC_ACCESS);
> }
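
For IS_ALIGNED()/IS_PAGE_ALIGNED(), I am planning to add the generic helper
(mirroring include/linux/align.h) to kvm_util_base.h and put the page-aligned
wrapper in x86-64's processor.h, roughly along these lines (untested sketch):

/* Generic alignment check, same definition as the kernel's align.h. */
#define IS_ALIGNED(x, a)        (((x) & ((typeof(x))(a) - 1)) == 0)

/* x86-64 processor.h: check alignment against the base page size. */
#define IS_PAGE_ALIGNED(x)      IS_ALIGNED(x, PAGE_SIZE)

With that, guest code can use kvm_hypercall_map_private()/_map_shared() for
the normal aligned cases and call __kvm_hypercall_map_gpa_range() directly
to check the returned error when exercising negative tests.
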
>
> > +static void handle_vm_exit_map_gpa_hypercall(struct kvm_vm *vm,
> > + volatile struct kvm_run *run)
>
> Pass in @vcpu, not a vm+run.
>
> > +{
> > + uint64_t gpa, npages, attrs, size;
> > +
> > + TEST_ASSERT(run->hypercall.nr == KVM_HC_MAP_GPA_RANGE,
> > + "Unhandled Hypercall %lld\n", run->hypercall.nr);
> > + gpa = run->hypercall.args[0];
> > + npages = run->hypercall.args[1];
> > + size = npages << MIN_PAGE_SHIFT;
> > + attrs = run->hypercall.args[2];
> > + pr_info("Explicit conversion off 0x%lx size 0x%lx to %s\n", gpa, size,
> > + (attrs & KVM_MAP_GPA_RANGE_ENCRYPTED) ? "private" : "shared");
> > +
> > + if (attrs & KVM_MAP_GPA_RANGE_ENCRYPTED)
> > + vm_update_private_mem(vm, gpa, size, ALLOCATE_MEM);
> > + else
> > + vm_update_private_mem(vm, gpa, size, UNBACK_MEM);
> > +
> > + run->hypercall.ret = 0;
> > +}
> > +
> > +static void handle_vm_exit_memory_error(struct kvm_vm *vm, volatile struct kvm_run *run)
>
> Same here, take a @vcpu.

Ack. Will address these comments in the next series.
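
For the exit handlers, I am planning to switch them to take the vcpu, roughly
as below (untested sketch, reusing vm_update_private_mem() and the
ALLOCATE_MEM/UNBACK_MEM flags from this series):

static void handle_vm_exit_map_gpa_hypercall(struct kvm_vcpu *vcpu)
{
        volatile struct kvm_run *run = vcpu->run;
        uint64_t gpa, npages, attrs, size;

        TEST_ASSERT(run->hypercall.nr == KVM_HC_MAP_GPA_RANGE,
                    "Unhandled Hypercall %lld\n", run->hypercall.nr);

        gpa = run->hypercall.args[0];
        npages = run->hypercall.args[1];
        size = npages << PAGE_SHIFT;
        attrs = run->hypercall.args[2];

        /*
         * Back the range with private memory on conversion to private,
         * unback it on conversion to shared.
         */
        if (attrs & KVM_MAP_GPA_RANGE_ENCRYPTED)
                vm_update_private_mem(vcpu->vm, gpa, size, ALLOCATE_MEM);
        else
                vm_update_private_mem(vcpu->vm, gpa, size, UNBACK_MEM);

        run->hypercall.ret = 0;
}

handle_vm_exit_memory_error() would take the vcpu the same way and pick up
run/vm from it.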