Re: [PATCH 2/3] KVM: arm64: Don't map PUD huge page if it's not available

From: Gavin Shan
Date: Sun Oct 25 2020 - 18:27:51 EST


Hi Marc,

On 10/25/20 9:05 PM, Marc Zyngier wrote:
On Sun, 25 Oct 2020 01:27:38 +0100,
Gavin Shan <gshan@xxxxxxxxxx> wrote:

PUD huge pages aren't available when CONFIG_ARM64_4K_PAGES is disabled.
In that case, there is no need to try mapping memory through PUD huge
pages, which saves some CPU cycles in the hot path.

This also corrects a code style issue introduced by commit 523b3999e5f6
("KVM: arm64: Try PMD block mappings if PUD mappings are not supported").

Signed-off-by: Gavin Shan <gshan@xxxxxxxxxx>
---
arch/arm64/kvm/mmu.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index a816cb8e619b..0f51585adc04 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -787,9 +787,11 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
 		vma_shift = PAGE_SHIFT;
 	}
 
+#ifdef CONFIG_ARM64_4K_PAGES
 	if (vma_shift == PUD_SHIFT &&
 	    !fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
-	       vma_shift = PMD_SHIFT;
+		vma_shift = PMD_SHIFT;
+#endif
 
 	if (vma_shift == PMD_SHIFT &&
 	    !fault_supports_stage2_huge_mapping(memslot, hva, PMD_SIZE)) {
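
(For reference, the same compile-time gating could also be expressed without
the preprocessor by using the kernel's IS_ENABLED() helper, which keeps the
dead branch visible to the compiler. A minimal sketch against the
user_mem_abort() context quoted above, shown only as an illustrative
alternative and not as part of the posted patch:

	/* Illustrative alternative only: the same guard written with
	 * IS_ENABLED(), so the PUD branch compiles out when
	 * CONFIG_ARM64_4K_PAGES is disabled but is still type-checked. */
	if (IS_ENABLED(CONFIG_ARM64_4K_PAGES) &&
	    vma_shift == PUD_SHIFT &&
	    !fault_supports_stage2_huge_mapping(memslot, hva, PUD_SIZE))
		vma_shift = PMD_SHIFT;
)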


I really don't buy the "CPU cycles" argument here either. Can you
actually measure any difference here?

You have taken a fault, gone through a full guest exit, triaged it,
and are about to go into a page mapping operation which may result in
a TLBI, before reentering the guest. It only happens a handful of times
per page over the lifetime of the guest unless you start swapping. Hot
path? I don't think so.


Thanks for the explanation. Agreed, and I will drop this change in v2.

Thanks,
Gavin