Re: [PATCH v1 2/5] mm/rmap: introduce and use hugetlb_remove_rmap()

From: David Hildenbrand
Date: Tue Nov 28 2023 - 11:39:45 EST


On 28.11.23 17:08, Peter Xu wrote:
> On Tue, Nov 28, 2023 at 03:52:02PM +0100, David Hildenbrand wrote:
>> hugetlb rmap handling differs quite a lot from "ordinary" rmap code. We
>> don't want this hugetlb special-casing in the rmap functions, as
>> we're special casing the callers already. Let's simply use a separate
>> function for hugetlb.

> We were special casing some, not all..
>
> And IIUC the suggestion from the community is we reduce that "special
> casing", instead of adding more? To be explicit below..

Quoting from the cover letter:

"We have hugetlb special-casing/checks in the callers in all cases either way already in place: it doesn't make too much sense to call generic-looking functions that end up doing hugetlb specific things from hugetlb special-cases."

[...]

>> +++ b/mm/rmap.c
>> @@ -1440,13 +1440,6 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
>>  	VM_BUG_ON_PAGE(compound && !PageHead(page), page);
>> -	/* Hugetlb pages are not counted in NR_*MAPPED */
>> -	if (unlikely(folio_test_hugetlb(folio))) {
>> -		/* hugetlb pages are always mapped with pmds */
>> -		atomic_dec(&folio->_entire_mapcount);
>> -		return;
>> -	}

> Fundamentally, in the ideal world where hugetlb can be merged into generic
> mm, I'd imagine this simply being dropped here; meanwhile...


Quoting from the cover letter:

"If ever something about hugetlb changes that makes them actually
partially-mappable folios, we can look into cleanly merging all code
paths, not just some."

>> -
>>  	/* Is page being unmapped by PTE? Is this its last map to be removed? */
>>  	if (likely(!compound)) {
>>  		last = atomic_add_negative(-1, &page->_mapcount);
>> @@ -1804,7 +1797,10 @@ static bool try_to_unmap_one(struct folio *folio, struct vm_area_struct *vma,
>>  			dec_mm_counter(mm, mm_counter_file(&folio->page));
>>  		}
>>  discard:
>> -		page_remove_rmap(subpage, vma, folio_test_hugetlb(folio));
>> +		if (unlikely(folio_test_hugetlb(folio)))
>> +			hugetlb_remove_rmap(folio);
>> +		else
>> +			page_remove_rmap(subpage, vma, false);

> ... rather than moving hugetlb_* handling even further up the stack, we
> should hopefully be able to keep this one as a generic API.
>
> I worry this patch is adding more hugetlb-specific paths even higher in
> the call stack, making it harder to generalize, which goes the other way
> round from what we wanted per previous discussions.
>
> That said, I have read almost nothing of the recent rmap patches yet, so
> I may be missing some important goal here.

Quoting from the cover letter:

"This is a stanalone cleanup, but it gets more relevant when adding more
rmap batching (we cannot map parts of a hugetlb folio) or changing the way we handle partially-mappable folios as in [1], whereby we'd have to add more hugetlb special casing to keep hugetlb working as is."
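For reference, the new helper essentially carries over the counting logic
removed from page_remove_rmap() above; a rough sketch (the sanity check
shown here is an assumption, the exact checks in the actual patch may
differ):

        /*
         * hugetlb folios are never partially mapped: there is no per-page
         * _mapcount and no NR_*MAPPED accounting to update, so removing
         * the rmap boils down to decrementing the folio's entire mapcount.
         */
        static inline void hugetlb_remove_rmap(struct folio *folio)
        {
                VM_WARN_ON_FOLIO(!folio_test_hugetlb(folio), folio);

                atomic_dec(&folio->_entire_mapcount);
        }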


Thanks for the review. If you have strong feelings, please add an explicit nack. Otherwise, I'll move forward with this.

--
Cheers,

David / dhildenb