RE: [PATCH v10 10/16] KVM: x86: Introduce KVM_GET_SHARED_PAGES_LIST ioctl

From: Kalra, Ashish
Date: Thu Feb 18 2021 - 02:26:02 EST


[AMD Public Use]

-----Original Message-----
From: Sean Christopherson <seanjc@xxxxxxxxxx>
Sent: Wednesday, February 17, 2021 10:13 AM
To: Kalra, Ashish <Ashish.Kalra@xxxxxxx>
Cc: pbonzini@xxxxxxxxxx; tglx@xxxxxxxxxxxxx; mingo@xxxxxxxxxx; hpa@xxxxxxxxx; rkrcmar@xxxxxxxxxx; joro@xxxxxxxxxx; bp@xxxxxxx; Lendacky, Thomas <Thomas.Lendacky@xxxxxxx>; x86@xxxxxxxxxx; kvm@xxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx; srutherford@xxxxxxxxxx; venu.busireddy@xxxxxxxxxx; Singh, Brijesh <brijesh.singh@xxxxxxx>
Subject: Re: [PATCH v10 10/16] KVM: x86: Introduce KVM_GET_SHARED_PAGES_LIST ioctl

On Wed, Feb 17, 2021, Kalra, Ashish wrote:
>> From: Sean Christopherson <seanjc@xxxxxxxxxx> On Thu, Feb 04, 2021,
>> Ashish Kalra wrote:
>> > From: Brijesh Singh <brijesh.singh@xxxxxxx>
>> >
>> > The ioctl is used to retrieve a guest's shared pages list.
>>
>> > What's the performance hit to boot time if KVM_HC_PAGE_ENC_STATUS is
>> > passed through to userspace? That way, userspace could manage the
>> > set of pages in whatever data structure they want, and these get/set ioctls go away.
>>
>> What is the advantage of passing KVM_HC_PAGE_ENC_STATUS through to
>> user-space ?
>>
>> As such it is just a simple interface to get the shared pages list via
>> the get/set ioctls: an array is simply passed to these ioctls to
>> get/set the shared pages list.

> It eliminates any probability of the kernel choosing the wrong data structure, and it's two fewer ioctls to maintain and test.

The set shared pages list ioctl cannot be avoided: it needs to be issued to set up the shared pages list on the migrated
VM, which cannot be achieved by passing KVM_HC_PAGE_ENC_STATUS through to user-space.

So it makes sense to add both the get and set shared pages list ioctls; passing the hypercall through to user-space just adds
complexity without any significant gain.

> > Also, aren't there plans for an in-guest migration helper? If so, do
> > we have any idea what that interface will look like? E.g. if we're
> > going to end up with a full fledged driver in the guest, why not
> > bite the bullet now and bypass KVM entirely?
>
> Even the in-guest migration helper will be using page encryption
> status hypercalls, so some interface is surely required.

> If it's a driver with a more extensive interface, then the hypercalls can be replaced by a driver operation. That's obviously a big if, though.

> Also the in-guest migration will be mainly an OVMF component, won't
> really be a full fledged kernel driver in the guest.

> Is there code and/or a description of what the proposed helper would look like?

Not right now; there are prototype(s) under development, and I assume they will be posted upstream soon.

Thanks,
Ashish