Re: [PATCH v1 1/4] mm/compaction: enable compacting >0 order folios.

From: Baolin Wang
Date: Mon Nov 20 2023 - 04:18:26 EST

On 11/14/2023 1:01 AM, Zi Yan wrote:
From: Zi Yan <ziy@xxxxxxxxxx>

migrate_pages() supports migration of >0 order folios, and during compaction,
even if compaction_alloc() cannot provide >0 order free pages,
migrate_pages() can split the source folio and try to migrate the base pages
resulting from the split. This can serve as a baseline and starting point for
adding support for compacting >0 order folios.
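
(The fallback relied on above is internal to migrate_pages(). A minimal
sketch of its shape, with hypothetical helpers alloc_dst_folio() and
migrate_one_folio() standing in for the real mm/migrate.c internals:

/*
 * Illustrative sketch only, not the actual mm/migrate.c code.
 * split_folio_to_list() is a real helper; the other two are
 * hypothetical stand-ins for the allocation and migration steps.
 */
static int migrate_or_split(struct folio *src, struct list_head *split_list)
{
	struct folio *dst;

	/* First try to migrate the folio at its current order. */
	dst = alloc_dst_folio(folio_order(src));	/* hypothetical */
	if (dst)
		return migrate_one_folio(src, dst);	/* hypothetical */

	/*
	 * No free page of a matching order (e.g. compaction_alloc()
	 * could only find order-0 pages): split src and let the caller
	 * retry with the resulting base pages.
	 */
	return split_folio_to_list(src, split_list);
}
)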

Suggested-by: Huang Ying <ying.huang@xxxxxxxxx>
Signed-off-by: Zi Yan <ziy@xxxxxxxxxx>
---
mm/compaction.c | 57 ++++++++++++++++++++++++++++++++++++-------------
1 file changed, 42 insertions(+), 15 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 01ba298739dd..5217dd35b493 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -816,6 +816,21 @@ static bool too_many_isolated(struct compact_control *cc)
 	return too_many;
 }
 
+/*
+ * 1. if the page order is larger than or equal to target_order (i.e.,
+ * cc->order and when it is not -1 for global compaction), skip it since
+ * target_order already indicates no free page with larger than target_order
+ * exists and later migrating it will most likely fail;
+ *
+ * 2. compacting > pageblock_order pages does not improve memory fragmentation,
+ * skip them;
+ */
+static bool skip_isolation_on_order(int order, int target_order)
+{
+	return (target_order != -1 && order >= target_order) ||
+		order >= pageblock_order;
+}
+
 /**
  * isolate_migratepages_block() - isolate all migrate-able pages within
  *				  a single pageblock
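
To spell out the new helper's behavior with concrete values (assuming
pageblock_order == 9, the typical value on x86-64 with 4K pages):

skip_isolation_on_order(2, 4);   /* false: order-2 folio, order-4 target  */
skip_isolation_on_order(4, 4);   /* true:  order >= target_order          */
skip_isolation_on_order(9, -1);  /* true:  order >= pageblock_order       */
skip_isolation_on_order(3, -1);  /* false: global compaction, small folio */
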
@@ -1009,7 +1024,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		/*
 		 * Regardless of being on LRU, compound pages such as THP and
 		 * hugetlbfs are not to be compacted unless we are attempting
-		 * an allocation much larger than the huge page size (eg CMA).
+		 * an allocation larger than the compound page size.
 		 * We can potentially save a lot of iterations if we skip them
 		 * at once. The check is racy, but we can consider only valid
 		 * values and the only danger is skipping too much.
@@ -1017,11 +1032,18 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (PageCompound(page) && !cc->alloc_contig) {
 			const unsigned int order = compound_order(page);
 
-			if (likely(order <= MAX_ORDER)) {
-				low_pfn += (1UL << order) - 1;
-				nr_scanned += (1UL << order) - 1;
+			/*
+			 * Skip based on page order and compaction target order
+			 * and skip hugetlbfs pages.
+			 */
+			if (skip_isolation_on_order(order, cc->order) ||
+			    PageHuge(page)) {
+				if (order <= MAX_ORDER) {
+					low_pfn += (1UL << order) - 1;
+					nr_scanned += (1UL << order) - 1;
+				}
+				goto isolate_fail;
 			}
-			goto isolate_fail;
 		}
 
 		/*
@@ -1144,17 +1166,18 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 					goto isolate_abort;
 				}
 			}
+		}
 
-			/*
-			 * folio become large since the non-locked check,
-			 * and it's on LRU.
-			 */
-			if (unlikely(folio_test_large(folio) && !cc->alloc_contig)) {
-				low_pfn += folio_nr_pages(folio) - 1;
-				nr_scanned += folio_nr_pages(folio) - 1;
-				folio_set_lru(folio);
-				goto isolate_fail_put;
-			}
+		/*
+		 * Check LRU folio order under the lock
+		 */
+		if (unlikely(skip_isolation_on_order(folio_order(folio),
+						     cc->order) &&
+			     !cc->alloc_contig)) {
+			low_pfn += folio_nr_pages(folio) - 1;
+			nr_scanned += folio_nr_pages(folio) - 1;
+			folio_set_lru(folio);
+			goto isolate_fail_put;
+		}
 		}

Why was this part moved out of the 'if (lruvec != locked)' block? If we hold the lru lock, then we do not need to check again, right?
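
To make the question concrete, a minimal sketch of the placement this seems
to suggest, reusing the names from the quoted diff (not code from the
thread; the skip-bit handling under the lock is elided):

		/* If we already hold the lock, we can skip some rechecking */
		if (lruvec != locked) {
			if (locked)
				unlock_page_lruvec_irqrestore(locked, flags);

			compact_lock_irqsave(&lruvec->lru_lock, &flags, cc);
			locked = lruvec;

			/* ... skip-bit handling elided ... */

			/*
			 * The folio can only have changed while the lock was
			 * not held, so recheck its order only on this path.
			 */
			if (unlikely(skip_isolation_on_order(folio_order(folio),
							     cc->order) &&
				     !cc->alloc_contig)) {
				low_pfn += folio_nr_pages(folio) - 1;
				nr_scanned += folio_nr_pages(folio) - 1;
				folio_set_lru(folio);
				goto isolate_fail_put;
			}
		}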