Re: [PATCH] KVM: x86/mmu: Do not create SPTEs for GFNs that exceed host.MAXPHYADDR

From: Paolo Bonzini
Date: Fri Apr 29 2022 - 10:50:17 EST


On 4/29/22 16:42, Sean Christopherson wrote:
> On Fri, Apr 29, 2022, Paolo Bonzini wrote:
> > On 4/29/22 16:24, Sean Christopherson wrote:
> > > I don't love the divergent memslot behavior, but it's technically correct, so I
> > > can't really argue.  Do we want to "officially" document the memslot behavior?
> >
> > I don't know what you mean by officially document,
>
> Something in kvm/api.rst under KVM_SET_USER_MEMORY_REGION.
I'm not sure the API documentation is the best place, since userspace
does not know whether shadow paging is on (except perhaps indirectly,
through other capabilities).
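
The closest userspace can get today is probing capabilities.  Here is
a minimal sketch of that pattern; note that KVM_CAP_TDP_ENABLED is a
purely hypothetical capability name used for illustration (no such
capability exists, which is exactly the gap):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Hypothetical capability number, for illustration only. */
#define KVM_CAP_TDP_ENABLED 9999

int main(void)
{
	int kvm_fd = open("/dev/kvm", O_RDWR);

	if (kvm_fd < 0) {
		perror("open /dev/kvm");
		return 1;
	}

	/* KVM_CHECK_EXTENSION returns 0 for capabilities KVM does not know. */
	if (ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_TDP_ENABLED) > 0)
		printf("TDP enabled\n");
	else
		printf("not reported (the capability does not exist today)\n");

	return 0;
}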

It could even be made programmatic, such as returning 52 in
CPUID[0x80000008].  A nested KVM on L1 would then see no reserved
physical-address bits, so it would not be able to use the #PF(RSVD)
trick to detect MMIO faults.  That's not a big price to pay, but I'm
not sure it's a good idea in general...
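
To make the guest-visible effect concrete, here is a minimal sketch of
what a guest (or a nested L1 hypervisor) would read; CPUID.0x80000008
reports MAXPHYADDR in EAX[7:0], and a value of 52 leaves no reserved
physical-address bits for an L1 KVM to plant in its MMIO SPTEs:

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;

	/* __get_cpuid() returns 0 if the leaf is not supported. */
	if (!__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx))
		return 1;

	printf("MAXPHYADDR     = %u bits\n", eax & 0xff);        /* EAX[7:0]  */
	printf("linear address = %u bits\n", (eax >> 8) & 0xff); /* EAX[15:8] */

	return 0;
}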

Paolo


> > but at least I have relied on it to test KVM's MAXPHYADDR=52 cases before
> > such hardware existed. :)
>
> Ah, that's a very good reason to support this for shadow paging.  Maybe throw
> something about testing in the changelog?  Without considering the testing angle,
> it looks like KVM supports max=52 for !TDP just because it can, because practically
> speaking there's unlikely to be a use case for exposing that much memory to a
> guest when using shadow paging.