Re: [PATCH RFC 5/6] mm: rmap: weaken the WARN_ON in __folio_add_anon_rmap()

From: David Hildenbrand
Date: Thu Jan 18 2024 - 06:54:30 EST


On 18.01.24 12:10, Barry Song wrote:
From: Barry Song <v-songbaohua@xxxxxxxx>

In do_swap_page(), while supporting large folio swap-in, we use the helper
folio_add_anon_rmap_ptes(). This triggers a WARN_ON in __folio_add_anon_rmap().
We can silence the warning in two ways:
1. In do_swap_page(), call folio_add_new_anon_rmap() if we are sure the large
folio is a newly allocated one; call folio_add_anon_rmap_ptes() if we find the
large folio in the swapcache.
2. Always call folio_add_anon_rmap_ptes() in do_swap_page(), but weaken the
WARN_ON in __folio_add_anon_rmap() by making it less sensitive.

Option 2 seems better for do_swap_page(), as it lets us use unified code for
all cases.

Signed-off-by: Barry Song <v-songbaohua@xxxxxxxx>
Tested-by: Chuanhua Han <hanchuanhua@xxxxxxxx>
---
mm/rmap.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index f5d43edad529..469fcfd32317 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1304,7 +1304,10 @@ static __always_inline void __folio_add_anon_rmap(struct folio *folio,
* page.
*/
VM_WARN_ON_FOLIO(folio_test_large(folio) &&
- level != RMAP_LEVEL_PMD, folio);
+ level != RMAP_LEVEL_PMD &&
+ (!IS_ALIGNED(address, nr_pages * PAGE_SIZE) ||
+ (folio_test_swapcache(folio) && !IS_ALIGNED(folio->index, nr_pages)) ||
+ page != &folio->page), folio);
__folio_set_anon(folio, vma, address,
!!(flags & RMAP_EXCLUSIVE));
} else if (likely(!folio_test_ksm(folio))) {
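
For reference, option 1 would look roughly like this in do_swap_page() (a
minimal sketch, not part of the patch; it assumes the large-folio swap-in
series provides the folio, page, nr_pages, address and rmap_flags locals,
and uses folio_test_swapcache() to detect the swapcache case):

	if (folio_test_swapcache(folio))
		/* Folio was found in the swapcache: map the existing anon folio. */
		folio_add_anon_rmap_ptes(folio, page, nr_pages,
					 vma, address, rmap_flags);
	else
		/* Newly allocated folio: establish the anon rmap from scratch. */
		folio_add_new_anon_rmap(folio, vma, address);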


I have on my todo list to move all that !anon handling out of folio_add_anon_rmap_ptes(), and instead make the swapin code call folio_add_new_anon_rmap(), where we'll then have to pass an exclusive flag (-> whole new folio exclusive).

That's the cleaner approach.
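
A rough sketch of what that could look like (the exclusive-flag argument to
folio_add_new_anon_rmap() is hypothetical here, since the function does not
take rmap flags yet):

	/* Whole freshly allocated folio is exclusive to this process. */
	folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE);
	folio_add_lru_vma(folio, vma);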

--
Cheers,

David / dhildenb