Re: [PATCH v2 28/40] mm/memory: page_remove_rmap() -> folio_remove_rmap_pte()

From: David Hildenbrand
Date: Mon Jan 22 2024 - 12:51:54 EST


On 22.01.24 17:58, Ryan Roberts wrote:
> On 20/12/2023 22:44, David Hildenbrand wrote:
>> Let's convert zap_pte_range() and closely-related
>> tlb_flush_rmap_batch(). While at it, perform some more folio conversion
>> in zap_pte_range().
>>
>> Signed-off-by: David Hildenbrand <david@xxxxxxxxxx>
>> ---
>>  mm/memory.c     | 23 +++++++++++++----------
>>  mm/mmu_gather.c |  2 +-
>>  2 files changed, 14 insertions(+), 11 deletions(-)
>>
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 6552ea27b0bfa..eda2181275d9b 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -1434,6 +1434,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
>>  	arch_enter_lazy_mmu_mode();
>>  	do {
>>  		pte_t ptent = ptep_get(pte);
>> +		struct folio *folio;
>>  		struct page *page;
>>
>>  		if (pte_none(ptent))
>> @@ -1459,21 +1460,22 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
>>  			continue;
>>  		}
>>
>> +		folio = page_folio(page);
>>  		delay_rmap = 0;
>> -		if (!PageAnon(page)) {
>> +		if (!folio_test_anon(folio)) {
>>  			if (pte_dirty(ptent)) {
>> -				set_page_dirty(page);
>> +				folio_set_dirty(folio);
>
> Is this foliation change definitely correct? I note that set_page_dirty() is
> defined as:
>
> bool set_page_dirty(struct page *page)
> {
> 	return folio_mark_dirty(page_folio(page));
> }
>
> And folio_mark_dirty() does more than just set the PG_dirty bit. In my
> equivalent change, as part of the contpte series, I've swapped set_page_dirty()
> for folio_mark_dirty().

Good catch, that should be folio_mark_dirty(). Let me send a fixup.
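
The fixup is presumably just the obvious one-liner on top of this patch (a sketch, not the final posted fixup):

diff --git a/mm/memory.c b/mm/memory.c
--- a/mm/memory.c
+++ b/mm/memory.c
@@ ... @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 		if (!folio_test_anon(folio)) {
 			if (pte_dirty(ptent)) {
-				folio_set_dirty(folio);
+				folio_mark_dirty(folio);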

(the difference in naming for both functions really is bad)
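
To spell out the difference (paraphrased sketches, not verbatim kernel code): folio_set_dirty() is the raw page-flag setter generated from the PAGEFLAG() macros in include/linux/page-flags.h, while folio_mark_dirty() in mm/page-writeback.c runs the full dirtying protocol, notifying the filesystem and updating writeback accounting. Roughly:

/* include/linux/page-flags.h: only sets the PG_dirty bit */
static inline void folio_set_dirty(struct folio *folio)
{
	set_bit(PG_dirty, folio_flags(folio, 0));
}

/* mm/page-writeback.c: the full protocol (heavily simplified) */
bool folio_mark_dirty(struct folio *folio)
{
	struct address_space *mapping = folio_mapping(folio);

	if (likely(mapping))
		/* notify the filesystem, update writeback accounting */
		return mapping->a_ops->dirty_folio(mapping, folio);

	return noop_dirty_folio(mapping, folio);
}

So for a file-backed folio with pte_dirty() set in zap_pte_range(), only setting the flag would skip dirty_folio(), and the dirty data might never be queued for writeback.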


--
Cheers,

David / dhildenb