Re: [PATCH v6 10/11] mm, compaction: require only min watermarks for non-costly orders

From: Joonsoo Kim
Date: Tue Aug 16 2016 - 02:10:51 EST


On Wed, Aug 10, 2016 at 11:12:25AM +0200, Vlastimil Babka wrote:
> The __compaction_suitable() function checks the low watermark plus a
> compact_gap() gap to decide if there's enough free memory to perform
> compaction. Then __isolate_free_page() uses a low watermark check to decide
> if a particular free page can be isolated. In the latter case, using the low
> watermark is needlessly pessimistic, as the free page isolations are only
> temporary. For __compaction_suitable() the higher watermark makes sense for
> high-order allocations where more freepages increase the chance of success,
> and we can typically fail with some order-0 fallback when the system is
> struggling to reach that watermark. But for low-order allocations, forming
> the page should not be that hard. So using the low watermark here might just
> prevent compaction from even trying, and eventually lead to the OOM killer
> even if we are above min watermarks.
>
> So after this patch, we use the min watermark for non-costly orders in
> __compaction_suitable(), and for all orders in __isolate_free_page().
>
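For reference, and to frame my question below: if I read the earlier patch
in this series right, compact_gap() boils down to

static inline unsigned long compact_gap(unsigned int order)
{
	/* twice the allocation size, as headroom for migration targets */
	return 2UL << order;
}

i.e. the required gap scales with the order being compacted for.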
> Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>
> Acked-by: Michal Hocko <mhocko@xxxxxxxx>
> ---
> mm/compaction.c | 6 +++++-
> mm/page_alloc.c | 2 +-
> 2 files changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 80eaf9fff114..0bba270f97ad 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -1399,10 +1399,14 @@ static enum compact_result __compaction_suitable(struct zone *zone, int order,
> * isolation. We however do use the direct compactor's classzone_idx to
> * skip over zones where lowmem reserves would prevent allocation even
> * if compaction succeeds.
> + * For costly orders, we require low watermark instead of min for
> + * compaction to proceed to increase its chances.
> * ALLOC_CMA is used, as pages in CMA pageblocks are considered
> * suitable migration targets
> */
> - watermark = low_wmark_pages(zone) + compact_gap(order);
> + watermark = (order > PAGE_ALLOC_COSTLY_ORDER) ?
> + low_wmark_pages(zone) : min_wmark_pages(zone);
> + watermark += compact_gap(order);
> if (!__zone_watermark_ok(zone, 0, watermark, classzone_idx,
> ALLOC_CMA, wmark_target))
> return COMPACT_SKIPPED;
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 621e4211ce16..a5c0f914ec00 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2492,7 +2492,7 @@ int __isolate_free_page(struct page *page, unsigned int order)
>
> if (!is_migrate_isolate(mt)) {
> /* Obey watermarks as if the page was being allocated */
> - watermark = low_wmark_pages(zone) + (1 << order);
> + watermark = min_wmark_pages(zone) + (1UL << order);

This '1 << order' also needs a comment explaining it. Why can't we use
compact_gap() in this case?
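Something like the below (untested, and assuming compact_gap() as sketched
above) is what I have in mind, plus a comment either way on why the chosen
gap is sufficient:

	if (!is_migrate_isolate(mt)) {
		/* Obey watermarks as if the page was being allocated */
-		watermark = min_wmark_pages(zone) + (1UL << order);
+		watermark = min_wmark_pages(zone) + compact_gap(order);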

Thanks.