[RESEND PATCH] mm: align larger anonymous mappings on THP boundaries

From: Yang Shi
Date: Thu Dec 14 2023 - 17:35:04 EST


From: Rik van Riel <riel@xxxxxxxxxxx>

Align larger anonymous memory mappings on THP boundaries by going through
thp_get_unmapped_area if THPs are enabled for the current process.

With this patch, larger anonymous mappings are now THP aligned. When a
malloc library allocates a 2MB or larger arena, that arena can now be
mapped with THPs right from the start, which can result in better TLB hit
rates and improved execution time.
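
For illustration only (not part of the patch): a minimal userspace sketch
that maps a multi-megabyte anonymous region and checks whether the returned
address is PMD aligned. The 2MB PMD size below is an assumption (x86-64
style); whether the region really ends up backed by THPs can be confirmed
via AnonHugePages in /proc/self/smaps.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define ASSUMED_PMD_SIZE	(2UL << 20)	/* assumed 2MB PMD size */

int main(void)
{
	size_t len = 8 * ASSUMED_PMD_SIZE;	/* a "larger" anonymous mapping */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	memset(p, 0, len);	/* fault the region in */

	printf("addr %p is %sPMD aligned\n", p,
	       ((unsigned long)p & (ASSUMED_PMD_SIZE - 1)) ? "NOT " : "");

	munmap(p, len);
	return 0;
}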

Link: https://lkml.kernel.org/r/20220809142457.4751229f@xxxxxxxxxxxxxxxxxxxx
Signed-off-by: Rik van Riel <riel@xxxxxxxxxxx>
Reviewed-by: Yang Shi <shy828301@xxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Christopher Lameter <cl@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---
This patch was applied to v6.1, but was reverted due to a regression
report. However, it turned out the regression was not caused by this
patch. I pinged Andrew to reapply it, but it may have slipped through
the cracks. Since the patch helps promote THP, I rebased it onto the
latest mm-unstable.


mm/mmap.c | 3 +++
1 file changed, 3 insertions(+)

diff --git a/mm/mmap.c b/mm/mmap.c
index 9d780f415be3..dd25a2aa94f7 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2232,6 +2232,9 @@ get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
 		 */
 		pgoff = 0;
 		get_area = shmem_get_unmapped_area;
+	} else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
+		/* Ensures that larger anonymous mappings are THP aligned. */
+		get_area = thp_get_unmapped_area;
 	}
 
 	addr = get_area(file, addr, len, pgoff, flags);
--
2.41.0
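
For reference, and not part of the patch: the new branch above simply hands
the request to thp_get_unmapped_area(). Below is a simplified sketch of
what that helper does for the anonymous case; the real code in
mm/huge_memory.c also handles file offsets, limit/overflow checks and the
legacy topdown layout.

/* Simplified sketch only, not the actual implementation. */
static unsigned long thp_get_unmapped_area_sketch(struct file *filp,
		unsigned long addr, unsigned long len,
		unsigned long pgoff, unsigned long flags)
{
	unsigned long ret;

	/* Ask for PMD_SIZE of slack so the result can slide to a boundary. */
	ret = current->mm->get_unmapped_area(filp, addr, len + PMD_SIZE,
					     pgoff, flags);
	if (IS_ERR_VALUE(ret))
		return ret;

	/* Round the start up to the next PMD boundary inside the slack. */
	return round_up(ret, PMD_SIZE);
}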