Re: [PATCH v6] mm/filemap: remove hugetlb special casing in filemap.c

From: Mike Kravetz
Date: Mon Aug 21 2023 - 14:34:28 EST


On 08/17/23 11:18, Sidhartha Kumar wrote:
> Remove special cased hugetlb handling code within the page cache by
> changing the granularity of each index to the base page size rather than
> the huge page size. Adds new wrappers for hugetlb code to to interact with the
> page cache which convert to a linear index.
<snip>
> @@ -237,7 +234,7 @@ void filemap_free_folio(struct address_space *mapping, struct folio *folio)
>  	if (free_folio)
>  		free_folio(folio);
>
> -	if (folio_test_large(folio) && !folio_test_hugetlb(folio))
> +	if (folio_test_large(folio))
>  		refs = folio_nr_pages(folio);
>  	folio_put_refs(folio, refs);
>  }
> @@ -858,14 +855,15 @@ noinline int __filemap_add_folio(struct address_space *mapping,
>
>  	if (!huge) {
>  		int error = mem_cgroup_charge(folio, NULL, gfp);
> -		VM_BUG_ON_FOLIO(index & (folio_nr_pages(folio) - 1), folio);
>  		if (error)
>  			return error;
>  		charged = true;
> -		xas_set_order(&xas, index, folio_order(folio));
> -		nr = folio_nr_pages(folio);
>  	}

When a hugetlb folio is added to the page cache, the ref count will now
be increased by folio_nr_pages(). So, the ref count for a 2MB hugetlb
page on x86 (4KB base pages) will be increased by 512.
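
To make the arithmetic concrete, here is a trivial standalone
illustration (plain userspace C, not kernel code; the sizes are the
usual x86 defaults):

	#include <stdio.h>

	int main(void)
	{
		unsigned long huge_sz = 2UL << 20;	/* 2MB hugetlb folio */
		unsigned long base_sz = 4UL << 10;	/* 4KB base page */

		/* folio_nr_pages() for such a folio == ref count bump */
		printf("%lu\n", huge_sz / base_sz);	/* prints 512 */
		return 0;
	}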

We will need a corresponding change to migrate_huge_page_move_mapping().
For migration, the ref count is checked as follows:

	xas_lock_irq(&xas);
	expected_count = 2 + folio_has_private(src);
	if (!folio_ref_freeze(src, expected_count)) {
		xas_unlock_irq(&xas);
		return -EAGAIN;
	}
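
A sketch of the kind of adjustment I have in mind, assuming the
isolated caller still holds one reference and the page cache now holds
folio_nr_pages() references (folio_has_private() still accounts for
attached private data):

	xas_lock_irq(&xas);
	/*
	 * One ref per base page from the page cache, one for the
	 * caller, and one more if private data is attached.
	 */
	expected_count = folio_nr_pages(src) + 1 + folio_has_private(src);
	if (!folio_ref_freeze(src, expected_count)) {
		xas_unlock_irq(&xas);
		return -EAGAIN;
	}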

So, as is, this patch will break migration of hugetlb pages that are in
the page cache.

Sorry for not noticing this earlier.
--
Mike Kravetz