Re: [PATCH] x86/sev: Make early_set_memory_decrypted() calls page aligned

From: Steve Rutherford
Date: Mon Aug 21 2023 - 14:16:04 EST


On Mon, Aug 21, 2023 at 6:10 AM Tom Lendacky <thomas.lendacky@xxxxxxx> wrote:
>
> On 8/18/23 18:34, Steve Rutherford wrote:
> > early_set_memory_decrypted() assumes its parameters are page aligned.
> > Non-page-aligned calls result in additional pages being marked as
> > decrypted via the encryption status hypercall, which results in
> > consistent corruption of pages during live migration. Live migration
> > requires accurate encryption status information, since migrating a
> > page under the wrong encryption assumption corrupts its contents.
>
> Hmmm... I'm not sure this is the proper fix. The code is actually doing
> the right thing from an encryption/decryption point of view by checking
> the C-bit for the PTE associated with the virtual address and the size
> (possibly crossing page boundaries).
>
> I think the problem is on the call to early_set_mem_enc_dec_hypercall()
> where it doesn't take into account the possible crossing of page
> boundaries and so can under-count the number of pages, right?

Right now, if you request decryption of e.g. a non-page-aligned 0x40
byte structure, it rounds the 0x40 bytes up to one page and then
hypercalls to mark both the page the structure is on and the subsequent
page as decrypted (since rounding up from the unaligned address
spuriously stretches the range onto the next page). The arithmetic in
the combination of early_set_memory_enc_dec() and
early_set_mem_enc_dec_hypercall() is correct if they are called with
page-aligned vaddrs (non-page-aligned sizes are fine, if I understand
correctly).
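
To make the arithmetic concrete, here's a minimal userspace sketch of
the rounding behavior (the vaddr below is made up and a 4K page size
is assumed; this is an illustration, not the kernel code):

#include <stdio.h>

#define PAGE_SHIFT		12
#define PAGE_SIZE		(1UL << PAGE_SHIFT)
#define PAGE_ALIGN(x)		(((x) + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1))
#define PAGE_ALIGN_DOWN(x)	((x) & ~(PAGE_SIZE - 1))

int main(void)
{
	unsigned long vaddr = 0x100f80;	/* unaligned: offset 0xf80 into page 0x100000 */
	unsigned long size  = 0x40;	/* the struct fits entirely within that page */

	/* Current behavior: round the size up from the unaligned vaddr. */
	unsigned long cur_start = vaddr;
	unsigned long cur_end   = vaddr + PAGE_ALIGN(size);

	/* Patched behavior: align the start down, then round the span up. */
	unsigned long fix_start = PAGE_ALIGN_DOWN(vaddr);
	unsigned long fix_end   = fix_start + PAGE_ALIGN(vaddr + size - fix_start);

	/* Prints [0x100f80, 0x101f80): pages 0x100000 and 0x101000, the
	 * second one spurious. */
	printf("current: [%#lx, %#lx) covers pages %#lx..%#lx\n",
	       cur_start, cur_end,
	       PAGE_ALIGN_DOWN(cur_start), PAGE_ALIGN_DOWN(cur_end - 1));

	/* Prints [0x100000, 0x101000): only page 0x100000, as desired. */
	printf("patched: [%#lx, %#lx) covers pages %#lx..%#lx\n",
	       fix_start, fix_end,
	       PAGE_ALIGN_DOWN(fix_start), PAGE_ALIGN_DOWN(fix_end - 1));
	return 0;
}

With the aligned start, the hypercall range covers exactly the page(s)
the structure actually occupies.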

Thanks,
Steve
>
> Thanks,
> Tom
>
> >
> > Fixes: 4716276184ec ("X86/KVM: Decrypt shared per-cpu variables when SEV is active")
> > Signed-off-by: Steve Rutherford <srutherford@xxxxxxxxxx>
> > ---
> >  arch/x86/kernel/kvm.c | 14 +++++++++++++-
> >  1 file changed, 13 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> > index 6a36db4f79fd..a0c072d3103c 100644
> > --- a/arch/x86/kernel/kvm.c
> > +++ b/arch/x86/kernel/kvm.c
> > @@ -419,7 +419,14 @@ static u64 kvm_steal_clock(int cpu)
> >
> >  static inline void __set_percpu_decrypted(void *ptr, unsigned long size)
> >  {
> > -	early_set_memory_decrypted((unsigned long) ptr, size);
> > +	/*
> > +	 * early_set_memory_decrypted() requires page aligned parameters, but
> > +	 * this function needs to handle ptrs offset into a page.
> > +	 */
> > +	unsigned long start = PAGE_ALIGN_DOWN((unsigned long) ptr);
> > +	unsigned long end = (unsigned long) ptr + size;
> > +
> > +	early_set_memory_decrypted(start, end - start);
> >  }
> >
> >  /*
> > @@ -438,6 +445,11 @@ static void __init sev_map_percpu_data(void)
> >  		return;
> >
> >  	for_each_possible_cpu(cpu) {
> > +		/*
> > +		 * Calling __set_percpu_decrypted() for each per-cpu variable is
> > +		 * inefficient, since it may decrypt the same page multiple times.
> > +		 * That said, it avoids the need for more complicated logic.
> > +		 */
> >  		__set_percpu_decrypted(&per_cpu(apf_reason, cpu), sizeof(apf_reason));
> >  		__set_percpu_decrypted(&per_cpu(steal_time, cpu), sizeof(steal_time));
> >  		__set_percpu_decrypted(&per_cpu(kvm_apic_eoi, cpu), sizeof(kvm_apic_eoi));