Re: [PATCH v2 1/9] mm/compaction: skip useless pfn when updating cached pfn

From: Joonsoo Kim
Date: Mon Sep 07 2015 - 01:35:37 EST


On Mon, Aug 24, 2015 at 11:07:12AM +0200, Vlastimil Babka wrote:
> On 08/24/2015 04:19 AM, Joonsoo Kim wrote:
> >Cached pfn is used to determine the start position of the scanner
> >at the next compaction run. Currently, the cached pfn points to the
> >skipped pageblock itself, so we uselessly re-check whether that
> >pageblock is valid for compaction and whether its skip bit is set.
> >If we instead set the scanner's cached pfn to the pfn just past the
> >skipped pageblock, this check is no longer needed.
> >
> >Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
> >---
> > mm/compaction.c | 13 ++++++-------
> > 1 file changed, 6 insertions(+), 7 deletions(-)
> >
> >diff --git a/mm/compaction.c b/mm/compaction.c
> >index 6ef2fdf..c2d3d6a 100644
> >--- a/mm/compaction.c
> >+++ b/mm/compaction.c
> >@@ -261,10 +261,9 @@ void reset_isolation_suitable(pg_data_t *pgdat)
> > */
> > static void update_pageblock_skip(struct compact_control *cc,
> > struct page *page, unsigned long nr_isolated,
> >- bool migrate_scanner)
> >+ unsigned long pfn, bool migrate_scanner)
> > {
> > struct zone *zone = cc->zone;
> >- unsigned long pfn;
> >
> > if (cc->ignore_skip_hint)
> > return;
> >@@ -277,8 +276,6 @@ static void update_pageblock_skip(struct compact_control *cc,
> >
> > set_pageblock_skip(page);
> >
> >- pfn = page_to_pfn(page);
> >-
> > /* Update where async and sync compaction should restart */
> > if (migrate_scanner) {
> > if (pfn > zone->compact_cached_migrate_pfn[0])
> >@@ -300,7 +297,7 @@ static inline bool isolation_suitable(struct compact_control *cc,
> >
> > static void update_pageblock_skip(struct compact_control *cc,
> > struct page *page, unsigned long nr_isolated,
> >- bool migrate_scanner)
> >+ unsigned long pfn, bool migrate_scanner)
> > {
> > }
> > #endif /* CONFIG_COMPACTION */
> >@@ -509,7 +506,8 @@ isolate_fail:
> >
> > /* Update the pageblock-skip if the whole pageblock was scanned */
> > if (blockpfn == end_pfn)
> >- update_pageblock_skip(cc, valid_page, total_isolated, false);
> >+ update_pageblock_skip(cc, valid_page, total_isolated,
> >+ end_pfn, false);
>
> In isolate_freepages_block() this means we actually go logically
> *back* one pageblock, as the direction is opposite? I know it's not
> an issue after the redesign patch so you wouldn't notice it when
> testing the whole series. But there's a non-zero chance that the
> smaller fixes are merged first and the redesign later...

Hello, Vlastimil.
Sorry for the long delay. I was on vacation. :)
I will fix it in the next version, roughly along the lines of the
sketch below.
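
To illustrate, the free-scanner call site would need to pass a pfn
below the scanned pageblock instead of end_pfn, so that the cached
position keeps moving in the scanner's own direction. An untested
sketch (the exact expression may still need care at zone boundaries):

	/* Update the pageblock-skip if the whole pageblock was scanned */
	if (blockpfn == end_pfn) {
		/*
		 * The free scanner walks towards lower pfns, so cache a
		 * pfn below this pageblock rather than end_pfn, which
		 * would restart one pageblock "behind" in scan direction.
		 */
		unsigned long block_start_pfn;

		block_start_pfn = (end_pfn - 1) & ~(pageblock_nr_pages - 1);
		update_pageblock_skip(cc, valid_page, total_isolated,
					block_start_pfn - 1, false);
	}

The migrate scanner already moves towards higher pfns, so passing
end_pfn there should remain correct.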

Btw, if possible, could you review the patchset in detail? Or do you
have another plan for improving compaction? Please let me know where
you stand so I can decide on the future direction of this patchset.

Thanks.