[PATCH v2 00/13] mm/gup: Unify hugetlb, part 2

From: peterx
Date: Wed Jan 03 2024 - 04:15:06 EST


From: Peter Xu <peterx@xxxxxxxxxx>

v2:
- Collect acks
- Patch 9:
- Use READ_ONCE() to fetch pud entry [James]
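
For reference, the pattern in question looks roughly like the following
(illustrative sketch only; the real hunk is in patch 9's follow_pud_mask(),
and error handling is elided):

    pud_t pud = READ_ONCE(*pudp);   /* fetch the entry exactly once */

    if (pud_none(pud))
            return NULL;            /* real code bails via no_page_table() */
    /* from here on, only inspect the local 'pud' copy, never *pudp */

Reading the entry once into a local avoids acting on a pud that changes
under us between consecutive checks.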

rfc: https://lore.kernel.org/r/20231116012908.392077-1-peterx@xxxxxxxxxx
v1: https://lore.kernel.org/r/20231219075538.414708-1-peterx@xxxxxxxxxx

This is v2 of the series, based on the latest mm-unstable (856325d361df).

This series removes the hugetlb slow gup path, following the previous
refactor work [1], so that slow gup now uses the exact same path to process
all kinds of memory, including hugetlb.

For the long term, we may want to remove most, if not all, call sites of
huge_pte_offset(). Ideally, that API could be dropped from the arch
hugetlb API entirely. This series is one small step towards merging
hugetlb-specific code into the generic mm paths; from that POV, it removes
one of the many references to huge_pte_offset().

One goal of this route is that we can reconsider merging hugetlb features
like High Granularity Mapping (HGM). It was not accepted in the past
because it would add a lot of hugetlb-specific code and make the mm code
even harder to maintain. With a merged codeset, features like HGM can
hopefully share some code with THP, whether legacy (PMD+) or modern
(contiguous PTEs).

To make this work, the generic slow gup code needs to at least understand
hugepd, which fast-gup already does. Fortunately, that seems to be the
only major thing slow gup must be taught before it can share the common
path, besides the normal huge PxD entries; see the sketch below. Non-gup
paths can be more challenging, but that's a question for later.
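
For reference, the shape of the check fast-gup already carries, modeled on
gup_pmd_range() in mm/gup.c as of this baseline (the variables come from
that walker; nothing here is new API):

    } else if (unlikely(is_hugepd(__hugepd(pmd_val(pmd))))) {
            /* ppc hugepd: treat the whole directory as one huge leaf */
            if (!gup_huge_pd(__hugepd(pmd_val(pmd)), addr,
                             PMD_SHIFT, next, flags, pages, nr))
                    return 0;
    }

The slow path gains an equivalent awareness, so both walkers agree on what
a hugepd entry means.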

There's one major difference in how slow gup handles cont_pte / cont_pmd
entries, currently supported on three architectures (aarch64, riscv, ppc).
Before the series, slow gup could recognize e.g. a cont_pte entry with the
help of huge_pte_offset() when an hstate was around. Now that is gone, but
things still work by looking up the page table entries one by one; for
example, a 64KB cont_pte mapping on a 4KB base page size is now walked as
16 individual pte lookups rather than one.

It's not ideal, but hopefully this change does not yet affect major
workloads. There's some more information in the commit message of the last
patch. If this becomes a concern, we can consider teaching slow gup to
recognize cont pte/pmd entries, which should recover the lost performance.
But I doubt the necessity for now, so I kept it as simple as possible.
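
Should that ever be needed, one conceivable shape (arm64-only and purely
hypothetical, not part of this series) would be to test the contiguous bit
inside the pte loop and step over the whole run at once:

    /* hypothetical arm64-only sketch; not in this series */
    if (pte_cont(pte)) {
            /* CONT_PTES entries map one contiguous range; skip it whole */
            addr = ALIGN_DOWN(addr, CONT_PTE_SIZE) + CONT_PTE_SIZE;
            continue;
    }

Each of the three architectures would need its own such test, which is
part of the cost of that option.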

Test Done
=========

V1 of this series went through the normal GUP smoke tests over different
memory types on several archs (using VM instances): x86_64, aarch64 and
ppc64le. For aarch64, it was tested over 64KB cont_pte huge pages. For
ppc64le, it was tested over 16MB hugepd entries (Power8 hash MMU on a 4K
base page size).

Patch layout
============

Patch 1-7: Preparatory work and cleanups in relevant code paths
Patch 8-12: Teach slow gup about all kinds of huge entries (pXd, hugepd)
Patch 13: Drop hugetlb_follow_page_mask()

More information can be found in the commit message of each patch.
Comments are welcome. Thanks.

[1] https://lore.kernel.org/all/20230628215310.73782-1-peterx@xxxxxxxxxx

Peter Xu (13):
mm/Kconfig: CONFIG_PGTABLE_HAS_HUGE_LEAVES
mm/hugetlb: Declare hugetlbfs_pagecache_present() non-static
mm: Provide generic pmd_thp_or_huge()
mm: Make HPAGE_PXD_* macros even if !THP
mm: Introduce vma_pgtable_walk_{begin|end}()
mm/gup: Drop folio_fast_pin_allowed() in hugepd processing
mm/gup: Refactor record_subpages() to find 1st small page
mm/gup: Handle hugetlb for no_page_table()
mm/gup: Cache *pudp in follow_pud_mask()
mm/gup: Handle huge pud for follow_pud_mask()
mm/gup: Handle huge pmd for follow_pmd_mask()
mm/gup: Handle hugepd for follow_page()
mm/gup: Handle hugetlb in the generic follow_page_mask code

 include/linux/huge_mm.h |  25 +--
 include/linux/hugetlb.h |  16 +-
 include/linux/mm.h      |   3 +
 include/linux/pgtable.h |   4 +
 mm/Kconfig              |   3 +
 mm/gup.c                | 362 ++++++++++++++++++++++++++++++++--------
 mm/huge_memory.c        | 133 +--------------
 mm/hugetlb.c            |  75 +--------
 mm/internal.h           |   7 +-
 mm/memory.c             |  12 ++
 10 files changed, 342 insertions(+), 298 deletions(-)

--
2.41.0