Re: [PATCH 06/23] KVM: MMU: load new PGD once nested two-dimensional paging is initialized

From: David Matlack
Date: Fri Feb 04 2022 - 14:18:19 EST


On Fri, Feb 04, 2022 at 06:57:01AM -0500, Paolo Bonzini wrote:
> __kvm_mmu_new_pgd looks at the MMU's root_level and shadow_root_level
> via fast_pgd_switch.

Those checks are just for performance, correct (i.e. to skip iterating
through the list of roots)?
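
For anyone following along, fast_pgd_switch() at this point in the
series looks roughly like this (paraphrasing mmu.c from memory, so
treat it as a sketch rather than a verbatim quote):

static bool fast_pgd_switch(struct kvm_vcpu *vcpu, gpa_t new_pgd,
			    union kvm_mmu_page_role new_role)
{
	struct kvm_mmu *mmu = vcpu->arch.mmu;

	/*
	 * Only 64-bit hosts and guests are handled here; with fewer
	 * than four paging levels there would be PDPTEs to deal with.
	 */
	if (mmu->shadow_root_level >= PT64_ROOT_4LEVEL &&
	    mmu->root_level >= PT64_ROOT_4LEVEL)
		return cached_root_available(vcpu, new_pgd, new_role);

	return false;
}

If that's the whole story, stale root_level/shadow_root_level values
only change whether we attempt the cached-root fast path -- hence the
question.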

Either way, it's probably worth including a Fixes tag, e.g.:
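
  Fixes: 50c28f21d045 ("kvm: x86: Use fast CR3 switch for nested VMX")

(hash and subject taken from the commit message below.)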

> It makes no sense to call it before updating
> these fields, even though it was done like that ever since nested
> VMX grew the ability to use fast CR3 switch (commit 50c28f21d045,
> "kvm: x86: Use fast CR3 switch for nested VMX").
>
> Pull it to the end of the initialization of the shadow nested MMUs.
>
> Signed-off-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>

Reviewed-by: David Matlack <dmatlack@xxxxxxxxxx>

> ---
> arch/x86/kvm/mmu/mmu.c | 41 +++++++++++++++++++----------------------
> 1 file changed, 19 insertions(+), 22 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
> index 577e70509510..b8ab16323629 100644
> --- a/arch/x86/kvm/mmu/mmu.c
> +++ b/arch/x86/kvm/mmu/mmu.c
> @@ -4869,10 +4869,9 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
>  
>  	new_role = kvm_calc_shadow_npt_root_page_role(vcpu, &regs);
>  
> -	__kvm_mmu_new_pgd(vcpu, nested_cr3, new_role.base);
> -
>  	shadow_mmu_init_context(vcpu, context, &regs, new_role);
>  	reset_shadow_zero_bits_mask(vcpu, context, is_efer_nx(context));
> +	__kvm_mmu_new_pgd(vcpu, nested_cr3, new_role.base);
>  }
>  EXPORT_SYMBOL_GPL(kvm_init_shadow_npt_mmu);
>
> @@ -4906,27 +4905,25 @@ void kvm_init_shadow_ept_mmu(struct kvm_vcpu *vcpu, bool execonly,
>  		kvm_calc_shadow_ept_root_page_role(vcpu, accessed_dirty,
>  						   execonly, level);
>  
> -	__kvm_mmu_new_pgd(vcpu, new_eptp, new_role.base);
> -
> -	if (new_role.as_u64 == context->mmu_role.as_u64)
> -		return;
> -
> -	context->mmu_role.as_u64 = new_role.as_u64;
> -
> -	context->shadow_root_level = level;
> -
> -	context->ept_ad = accessed_dirty;
> -	context->page_fault = ept_page_fault;
> -	context->gva_to_gpa = ept_gva_to_gpa;
> -	context->sync_page = ept_sync_page;
> -	context->invlpg = ept_invlpg;
> -	context->root_level = level;
> -	context->direct_map = false;
> +	if (new_role.as_u64 != context->mmu_role.as_u64) {
> +		context->mmu_role.as_u64 = new_role.as_u64;
> +
> +		context->shadow_root_level = level;
> +
> +		context->ept_ad = accessed_dirty;
> +		context->page_fault = ept_page_fault;
> +		context->gva_to_gpa = ept_gva_to_gpa;
> +		context->sync_page = ept_sync_page;
> +		context->invlpg = ept_invlpg;
> +		context->root_level = level;
> +		context->direct_map = false;
> +		update_permission_bitmask(context, true);
> +		context->pkru_mask = 0;
> +		reset_rsvds_bits_mask_ept(vcpu, context, execonly, huge_page_level);
> +		reset_ept_shadow_zero_bits_mask(context, execonly);
> +	}
>  
> -	update_permission_bitmask(context, true);
> -	context->pkru_mask = 0;
> -	reset_rsvds_bits_mask_ept(vcpu, context, execonly, huge_page_level);
> -	reset_ept_shadow_zero_bits_mask(context, execonly);
> +	__kvm_mmu_new_pgd(vcpu, new_eptp, new_role.base);
>  }
>  EXPORT_SYMBOL_GPL(kvm_init_shadow_ept_mmu);
>  
> --
> 2.31.1