[PATCH 5.15 082/207] KVM: X86: Use vcpu->arch.walk_mmu for kvm_mmu_invlpg()

From: Greg Kroah-Hartman
Date: Mon Dec 06 2021 - 10:40:52 EST


From: Lai Jiangshan <laijs@xxxxxxxxxxxxxxxxx>

commit 05b29633c7a956d5675f5fbba70db0d26aa5e73e upstream.

INVLPG operates on guest virtual addresses, which are represented by
vcpu->arch.walk_mmu. In nested virtualization scenarios,
kvm_mmu_invlpg() was using the wrong MMU structure: if L2's INVLPG was
emulated by L0 (which in practice rarely happens) while nested
two-dimensional paging is enabled, the call to ->tlb_flush_gva() would
be skipped and the hardware TLB entry would not be invalidated.
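
For context, a simplified, paraphrased sketch of the gating inside
kvm_mmu_invalidate_gva() (abbreviated, not a verbatim copy of
arch/x86/kvm/mmu/mmu.c) shows why the MMU argument matters: the
hardware TLB flush is skipped when the MMU passed in is the nested-TDP
guest_mmu, and while L2 runs with nested two-dimensional paging that is
exactly what vcpu->arch.mmu points to, whereas vcpu->arch.walk_mmu
points to the nested_mmu that handles guest-virtual-address walks.

/*
 * Simplified sketch, paraphrased from kvm_mmu_invalidate_gva();
 * helper details omitted for brevity.
 */
void kvm_mmu_invalidate_gva(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
			    gva_t gva, hpa_t root_hpa)
{
	/*
	 * For the nested-TDP guest_mmu, "gva" is really an L2 GPA, so
	 * no hardware GVA flush is issued.  While L2 runs with nested
	 * TDP, vcpu->arch.mmu == &vcpu->arch.guest_mmu, which is how
	 * passing vcpu->arch.mmu lost the flush; vcpu->arch.walk_mmu
	 * (== &vcpu->arch.nested_mmu in that case) takes the flush
	 * path.
	 */
	if (mmu != &vcpu->arch.guest_mmu)
		static_call(kvm_x86_tlb_flush_gva)(vcpu, gva);

	/* ... then sync shadow pages via mmu->invlpg(), if set ... */
}

Outside of nesting, vcpu->arch.walk_mmu and vcpu->arch.mmu both point
to the same root MMU, so this change is a no-op for non-nested guests.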

Signed-off-by: Lai Jiangshan <laijs@xxxxxxxxxxxxxxxxx>
Message-Id: <20211124122055.64424-5-jiangshanlai@xxxxxxxxx>
Cc: stable@xxxxxxxxxxxxxxx
Signed-off-by: Paolo Bonzini <pbonzini@xxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
arch/x86/kvm/mmu/mmu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -5369,7 +5369,7 @@ void kvm_mmu_invalidate_gva(struct kvm_v

void kvm_mmu_invlpg(struct kvm_vcpu *vcpu, gva_t gva)
{
- kvm_mmu_invalidate_gva(vcpu, vcpu->arch.mmu, gva, INVALID_PAGE);
+ kvm_mmu_invalidate_gva(vcpu, vcpu->arch.walk_mmu, gva, INVALID_PAGE);
++vcpu->stat.invlpg;
}
EXPORT_SYMBOL_GPL(kvm_mmu_invlpg);