[PATCH 1/2] KVM: x86/mmu: Remove dead code in .change_pte() handler in x86 TDP MMU

From: Yan Zhao
Date: Tue Aug 08 2023 - 11:56:00 EST


Remove the dead code set_spte_gfn() from the x86 TDP MMU's .change_pte()
handler to save CPU cycles and to prepare for the optimization in the next
patch.

As explained in commit c13fda237f08 ("KVM: Assert that notifier count is
elevated in .change_pte()"), when .change_pte() was added by commit
828502d30073 ("ksm: add mmu_notifier set_pte_at_notify()"), it was invoked
without any surrounding notifications. However, since commit 6bdb913f0a70
("mm: wrap calls to set_pte_at_notify with invalidate_range_start and
invalidate_range_end"), every call to .change_pte() is guaranteed to be
surrounded by an .invalidate_range_start() and .invalidate_range_end()
pair.
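
For reference, after that change the mm side does roughly the following
around every PTE replacement (a simplified sketch based on replace_page()
in mm/ksm.c; locking and error handling elided):

	mmu_notifier_invalidate_range_start(&range);	/* KVM zaps the SPTEs */
	...
	set_pte_at_notify(mm, addr, ptep, newpte);	/* invokes .change_pte() */
	...
	mmu_notifier_invalidate_range_end(&range);	/* faults may map again */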

Since .invalidate_range_start() always causes KVM to zap the related SPTEs,
and the page fault path cannot successfully install new SPTEs before
.invalidate_range_end(), kvm_set_spte_gfn() should never find any
shadow-present leaf SPTEs to operate on, and set_spte_gfn() is therefore
never called.
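
On the KVM side the resulting flow looks roughly like this (an illustrative
sketch, not the exact code; the helper names are spelled as in current
virt/kvm/kvm_main.c and may differ between kernel versions):

	kvm_mmu_notifier_invalidate_range_start()
	  kvm_mmu_invalidate_begin()	/* mmu_invalidate_in_progress++ */
	  kvm_unmap_gfn_range()		/* zap SPTEs covering the range */

	kvm_mmu_notifier_change_pte()	/* no shadow-present leaf SPTE left */

	kvm_mmu_notifier_invalidate_range_end()
	  kvm_mmu_invalidate_end()	/* mmu_invalidate_in_progress-- */

Any page fault racing with the invalidation sees the elevated
mmu_invalidate_in_progress count (e.g. via mmu_invalidate_retry_hva()) and
retries instead of installing a new SPTE, so no mapping can reappear
between the two notifications.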

So, in the TDP MMU, just drop set_spte_gfn() and keep only the warning on
huge pages.

Signed-off-by: Yan Zhao <yan.y.zhao@xxxxxxxxx>
---
arch/x86/kvm/mmu/tdp_mmu.c | 40 ++++----------------------------------
1 file changed, 4 insertions(+), 36 deletions(-)

diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
index 6250bd3d20c1..89a1f222e823 100644
--- a/arch/x86/kvm/mmu/tdp_mmu.c
+++ b/arch/x86/kvm/mmu/tdp_mmu.c
@@ -1235,36 +1235,6 @@ bool kvm_tdp_mmu_test_age_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 	return kvm_tdp_mmu_handle_gfn(kvm, range, test_age_gfn);
 }

-static bool set_spte_gfn(struct kvm *kvm, struct tdp_iter *iter,
-			 struct kvm_gfn_range *range)
-{
-	u64 new_spte;
-
-	/* Huge pages aren't expected to be modified without first being zapped. */
-	WARN_ON(pte_huge(range->arg.pte) || range->start + 1 != range->end);
-
-	if (iter->level != PG_LEVEL_4K ||
-	    !is_shadow_present_pte(iter->old_spte))
-		return false;
-
-	/*
-	 * Note, when changing a read-only SPTE, it's not strictly necessary to
-	 * zero the SPTE before setting the new PFN, but doing so preserves the
-	 * invariant that the PFN of a present * leaf SPTE can never change.
-	 * See handle_changed_spte().
-	 */
-	tdp_mmu_iter_set_spte(kvm, iter, 0);
-
-	if (!pte_write(range->arg.pte)) {
-		new_spte = kvm_mmu_changed_pte_notifier_make_spte(iter->old_spte,
-								  pte_pfn(range->arg.pte));
-
-		tdp_mmu_iter_set_spte(kvm, iter, new_spte);
-	}
-
-	return true;
-}
-
 /*
  * Handle the changed_pte MMU notifier for the TDP MMU.
  * data is a pointer to the new pte_t mapping the HVA specified by the MMU
@@ -1273,12 +1243,10 @@ static bool set_spte_gfn(struct kvm *kvm, struct tdp_iter *iter,
  */
 bool kvm_tdp_mmu_set_spte_gfn(struct kvm *kvm, struct kvm_gfn_range *range)
 {
-	/*
-	 * No need to handle the remote TLB flush under RCU protection, the
-	 * target SPTE _must_ be a leaf SPTE, i.e. cannot result in freeing a
-	 * shadow page. See the WARN on pfn_changed in handle_changed_spte().
-	 */
-	return kvm_tdp_mmu_handle_gfn(kvm, range, set_spte_gfn);
+	/* Huge pages aren't expected to be modified */
+	WARN_ON(pte_huge(range->arg.pte) || range->start + 1 != range->end);
+
+	return false;
 }

/*
--
2.17.1