Re: [PATCH 3/3] KVM: arm64: Failback on unsupported huge page sizes

From: Gavin Shan
Date: Sun Oct 25 2020 - 19:04:58 EST


Hi Marc,

On 10/25/20 9:48 PM, Marc Zyngier wrote:
On Sun, 25 Oct 2020 01:27:39 +0100,
Gavin Shan <gshan@xxxxxxxxxx> wrote:

A huge page can be mapped through multiple contiguous PMDs or PTEs, but the
corresponding huge page sizes aren't currently supported by the page table
walker.

Fall back from these unsupported huge page sizes to the nearest supported
ones; otherwise the guest can't boot successfully. CONT_PMD_SHIFT falls back
to PMD_SHIFT and CONT_PTE_SHIFT to PAGE_SHIFT respectively.

Signed-off-by: Gavin Shan <gshan@xxxxxxxxxx>
---
arch/arm64/kvm/mmu.c | 8 ++++++++
1 file changed, 8 insertions(+)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index 0f51585adc04..81cbdc368246 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -793,12 +793,20 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		vma_shift = PMD_SHIFT;
 #endif
 
+	if (vma_shift == CONT_PMD_SHIFT)
+		vma_shift = PMD_SHIFT;
+
 	if (vma_shift == PMD_SHIFT &&
 	    !fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE)) {
 		force_pte = true;
 		vma_shift = PAGE_SHIFT;
 	}
 
+	if (vma_shift == CONT_PTE_SHIFT) {
+		force_pte = true;
+		vma_shift = PAGE_SHIFT;
+	}
+
 	vma_pagesize = 1UL << vma_shift;
 	if (vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE)
 		fault_ipa &= ~(vma_pagesize - 1);
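
For reference, the fallback above can be sketched as a small stand-alone
program (an illustration only, not the kernel code): the shift values below
are the ones that apply with a 4KB base page size, and the
fault_supports_stage2_huge_mapping() check that sits between the two branches
in the patch is omitted for brevity.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative values for a 4KB translation granule. */
#define PAGE_SHIFT	12	/* 4KB			     */
#define CONT_PTE_SHIFT	16	/* 64KB, 16 contiguous PTEs  */
#define PMD_SHIFT	21	/* 2MB			     */
#define CONT_PMD_SHIFT	25	/* 32MB, 16 contiguous PMDs  */

static unsigned long fall_back(unsigned long vma_shift, bool *force_pte)
{
	/* Contiguous PMDs aren't handled by the walker: use a single PMD. */
	if (vma_shift == CONT_PMD_SHIFT)
		vma_shift = PMD_SHIFT;

	/* Contiguous PTEs aren't handled either: force single pages. */
	if (vma_shift == CONT_PTE_SHIFT) {
		*force_pte = true;
		vma_shift = PAGE_SHIFT;
	}

	return vma_shift;
}

int main(void)
{
	unsigned long shifts[] = {
		PAGE_SHIFT, CONT_PTE_SHIFT, PMD_SHIFT, CONT_PMD_SHIFT,
	};

	for (unsigned int i = 0; i < sizeof(shifts) / sizeof(shifts[0]); i++) {
		bool force_pte = false;
		unsigned long out = fall_back(shifts[i], &force_pte);

		printf("vma_shift %2lu -> %2lu (%lu KB)%s\n",
		       shifts[i], out, (1UL << out) >> 10,
		       force_pte ? ", force_pte" : "");
	}

	return 0;
}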

Yup, nice catch. However, I think we should take this opportunity to
rationalise the logic here, and catch future discrepancies (should
someone add contiguous PUD or something similarly silly). How about
something like this (untested):


Yeah, I started the work to support contiguous PMDs/PTEs, but I'm not
sure when I can post the patches for review as my time has become a bit
fragmented recently. At least, I need to focus on "async page fault" in
the coming weeks :)

Thanks for the suggested code; it worked for me. I'll post v2 to
integrate it. However, I would like to drop PATCH[1] and PATCH[2]
as I really don't have strong reasons to keep them.

Thanks,
Gavin

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index cc323d96c9d4..d9a13a8a82e0 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -787,14 +787,31 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		vma_shift = PAGE_SHIFT;
 	}
 
-	if (vma_shift == PUD_SHIFT &&
-	    !fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
-		vma_shift = PMD_SHIFT;
+	switch (vma_shift) {
+	case PUD_SHIFT:
+		if (fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
+			break;
+		fallthrough;
 
-	if (vma_shift == PMD_SHIFT &&
-	    !fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE)) {
-		force_pte = true;
+	case CONT_PMD_SHIFT:
+		vma_shift = PMD_SHIFT;
+		fallthrough;
+
+	case PMD_SHIFT:
+		if (fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE))
+			break;
+		fallthrough;
+
+	case CONT_PTE_SHIFT:
 		vma_shift = PAGE_SHIFT;
+		force_pte = true;
+		fallthrough;
+
+	case PAGE_SHIFT:
+		break;
+
+	default:
+		WARN_ONCE(1, "Unknown vma_shift %d", vma_shift);
 	}
 
 	vma_pagesize = 1UL << vma_shift;
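
The cascade can again be modelled as a stand-alone sketch (an illustration
only, not the kernel code): the memslot alignment check is stubbed out as a
pair of flags, and the kernel's fallthrough pseudo-keyword is replaced with
plain comments.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative values for a 4KB translation granule. */
#define PAGE_SHIFT	12
#define CONT_PTE_SHIFT	16
#define PMD_SHIFT	21
#define CONT_PMD_SHIFT	25
#define PUD_SHIFT	30

/*
 * pud_ok/pmd_ok stand in for fault_supports_stage2_huge_mapping()
 * at PUD_SIZE and PMD_SIZE respectively.
 */
static int resolve_shift(int vma_shift, bool pud_ok, bool pmd_ok, bool *force_pte)
{
	switch (vma_shift) {
	case PUD_SHIFT:
		if (pud_ok)
			break;
		/* fall through */
	case CONT_PMD_SHIFT:
		vma_shift = PMD_SHIFT;
		/* fall through */
	case PMD_SHIFT:
		if (pmd_ok)
			break;
		/* fall through */
	case CONT_PTE_SHIFT:
		vma_shift = PAGE_SHIFT;
		*force_pte = true;
		/* fall through */
	case PAGE_SHIFT:
		break;
	default:
		fprintf(stderr, "Unknown vma_shift %d\n", vma_shift);
	}

	return vma_shift;
}

int main(void)
{
	int shifts[] = { PUD_SHIFT, CONT_PMD_SHIFT, PMD_SHIFT, CONT_PTE_SHIFT, PAGE_SHIFT };

	for (unsigned int i = 0; i < sizeof(shifts) / sizeof(shifts[0]); i++) {
		bool force_pte = false;
		/* Assume the memslot only allows PMD-sized block mappings. */
		int out = resolve_shift(shifts[i], false, true, &force_pte);

		printf("%2d -> %2d%s\n", shifts[i], out,
		       force_pte ? " (force_pte)" : "");
	}

	return 0;
}

With only PMD-sized block mappings allowed, PUD_SHIFT and CONT_PMD_SHIFT both
resolve to PMD_SHIFT, while CONT_PTE_SHIFT drops to PAGE_SHIFT with force_pte
set, which is the behaviour the switch statement is meant to guarantee.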