[PATCH 2/5] x86/mm: Don't do a TLB flush if changing a PTE that isn't marked present

From: Michael Kelley
Date: Fri Sep 29 2023 - 14:20:38 EST


The core function __change_page_attr() currently sets up a TLB flush if
a PTE is changed. But if the old value of the PTE doesn't include the
PRESENT flag, the PTE won't be in the TLB, so a flush isn't needed.

Avoid an unnecessary TLB flush by doing the flush only when the old
PTE value has the PRESENT flag set. This change improves the performance
of functions like set_memory_p() by skipping the flush when the memory
range was previously entirely not present.

Signed-off-by: Michael Kelley <mikelley@xxxxxxxxxxxxx>
---
arch/x86/mm/pat/set_memory.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 8e19796..d7ef8d3 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -1636,7 +1636,10 @@ static int __change_page_attr(struct cpa_data *cpa, int primary)
 	 */
 	if (pte_val(old_pte) != pte_val(new_pte)) {
 		set_pte_atomic(kpte, new_pte);
-		cpa->flags |= CPA_FLUSHTLB;
+
+		/* If old_pte isn't present, it's not in the TLB */
+		if (pte_present(old_pte))
+			cpa->flags |= CPA_FLUSHTLB;
 	}
 	cpa->numpages = 1;
 	return 0;
--
1.8.3.1