[PATCH 4/4] mm, compaction: make full priority ignore pageblock suitability

From: Vlastimil Babka
Date: Tue Sep 06 2016 - 09:53:35 EST


Several people have reported premature OOMs for order-2 allocations (stack)
caused by the OOM rework in 4.7. In the reported scenario (a parallel kernel
build and dd writing to two drives), many pageblocks get marked as Unmovable
and the compaction free scanner struggles to isolate free pages. Joonsoo Kim
pointed out that the free scanner skips pageblocks that are not movable, to
prevent filling them and thereby forcing non-movable allocations to fall back
to other pageblocks. This heuristic makes sense as it helps prevent long-term
fragmentation, but premature OOMs are the relatively more urgent problem. As a
compromise, this patch disables the heuristic only for the ultimate
(MIN_COMPACT_PRIORITY) compaction priority.
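
For illustration, a simplified sketch of the free scanner's per-pageblock
check with the new flag (the ignore_block_suitable short-circuit matches the
diff below; the rest paraphrases the existing heuristic and is not verbatim
upstream code):

	/* Sketch: pageblock suitability test in the free scanner after this patch */
	static bool suitable_migration_target(struct compact_control *cc,
							struct page *page)
	{
		/* Full (MIN_COMPACT_PRIORITY) priority: accept any pageblock */
		if (cc->ignore_block_suitable)
			return true;

		/* A buddy page of pageblock_order or larger: leave the block alone */
		if (PageBuddy(page) && page_order_unsafe(page) >= pageblock_order)
			return false;

		/* Otherwise only MOVABLE (and CMA) pageblocks are acceptable */
		return migrate_async_suitable(get_pageblock_migratetype(page));
	}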

Reported-by: Ralf-Peter Rohbeck <Ralf-Peter.Rohbeck@xxxxxxxxxxx>
Reported-by: Arkadiusz Miskiewicz <a.miskiewicz@xxxxxxxxx>
Reported-by: Olaf Hering <olaf@xxxxxxxxx>
Suggested-by: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>
Acked-by: Michal Hocko <mhocko@xxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
Cc: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Cc: David Rientjes <rientjes@xxxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxx>
---
 mm/compaction.c | 11 ++++++++---
 mm/internal.h   |  1 +
 2 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index 29f6c49dc9c2..86d4d0bbfc7c 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -997,8 +997,12 @@ isolate_migratepages_range(struct compact_control *cc, unsigned long start_pfn,
 #ifdef CONFIG_COMPACTION
 
 /* Returns true if the page is within a block suitable for migration to */
-static bool suitable_migration_target(struct page *page)
+static bool suitable_migration_target(struct compact_control *cc,
+							struct page *page)
 {
+	if (cc->ignore_block_suitable)
+		return true;
+
 	/* If the page is a large free page, then disallow migration */
 	if (PageBuddy(page)) {
 		/*
@@ -1083,7 +1087,7 @@ static void isolate_freepages(struct compact_control *cc)
 			continue;
 
 		/* Check the block is suitable for migration */
-		if (!suitable_migration_target(page))
+		if (!suitable_migration_target(cc, page))
 			continue;
 
 		/* If isolation recently failed, do not retry */
@@ -1656,7 +1660,8 @@ static enum compact_result compact_zone_order(struct zone *zone, int order,
 		.classzone_idx = classzone_idx,
 		.direct_compaction = true,
 		.whole_zone = (prio == MIN_COMPACT_PRIORITY),
-		.ignore_skip_hint = (prio == MIN_COMPACT_PRIORITY)
+		.ignore_skip_hint = (prio == MIN_COMPACT_PRIORITY),
+		.ignore_block_suitable = (prio == MIN_COMPACT_PRIORITY)
 	};
 	INIT_LIST_HEAD(&cc.freepages);
 	INIT_LIST_HEAD(&cc.migratepages);
diff --git a/mm/internal.h b/mm/internal.h
index 5214bf8e3171..537ac9951f5f 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -178,6 +178,7 @@ struct compact_control {
 	unsigned long last_migrated_pfn;/* Not yet flushed page being freed */
 	enum migrate_mode mode;		/* Async or sync migration mode */
 	bool ignore_skip_hint;		/* Scan blocks even if marked skip */
+	bool ignore_block_suitable;	/* Scan blocks considered unsuitable */
 	bool direct_compaction;		/* False from kcompactd or /proc/... */
 	bool whole_zone;		/* Whole zone should/has been scanned */
 	int order;			/* order a direct compactor needs */
--
2.9.3