[PATCH v2 2/8] x86/mm: Don't do a TLB flush if changing a PTE that isn't marked present

From: Michael Kelley
Date: Tue Nov 21 2023 - 16:20:41 EST


From: Michael Kelley <mhklinux@xxxxxxxxxxx>

The core function __change_page_attr() currently sets up a TLB flush if
a PTE is changed. But if the old value of the PTE doesn't include the
PRESENT flag, the PTE won't be in the TLB, so a flush isn't needed.
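
For reference, pte_present() on x86 tests slightly more than the
PRESENT bit alone (from arch/x86/include/asm/pgtable.h in recent
kernels; quoted here only to clarify what "marked present" means):

	static inline int pte_present(pte_t a)
	{
		return pte_flags(a) & (_PAGE_PRESENT | _PAGE_PROTNONE);
	}

So the flush is skipped only when neither bit is set, which errs on
the side of still flushing for _PAGE_PROTNONE entries.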

Avoid an unnecessary TLB flush by doing the flush only when the old
PTE is marked present. This improves the performance of functions
such as set_memory_p() by skipping the flush when the memory range
was entirely not present beforehand.
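
As a sketch of the intended effect (using the existing
set_memory_np()/set_memory_p() pair; addr and numpages here are
placeholders for illustration):

	/*
	 * Transiently unmap a range, then restore it. set_memory_np()
	 * still flushes because the old PTEs were present, but with
	 * this patch set_memory_p() skips the TLB flush, since
	 * not-present PTEs cannot have been cached in the TLB.
	 */
	set_memory_np(addr, numpages);
	/* ... range is inaccessible here ... */
	set_memory_p(addr, numpages);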

Signed-off-by: Michael Kelley <mhklinux@xxxxxxxxxxx>
---
 arch/x86/mm/pat/set_memory.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 8e19796e7ce5..d7ef8d312a47 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -1636,7 +1636,10 @@ static int __change_page_attr(struct cpa_data *cpa, int primary)
 	 */
 	if (pte_val(old_pte) != pte_val(new_pte)) {
 		set_pte_atomic(kpte, new_pte);
-		cpa->flags |= CPA_FLUSHTLB;
+
+		/* If old_pte isn't present, it's not in the TLB */
+		if (pte_present(old_pte))
+			cpa->flags |= CPA_FLUSHTLB;
 	}
 	cpa->numpages = 1;
 	return 0;
--
2.25.1