On Tue, Jan 24, 2012 at 01:21:36PM -0500, Rik van Riel wrote:

When built with CONFIG_COMPACTION, kswapd does not try to free
contiguous pages.
balance_pgdat() gets its order from wakeup_kswapd(). This does not apply
to THP because kswapd does not get woken for THP, but it should still be
woken for allocations like jumbo frames or order-1. As kswapd does no
memory compaction itself, this patch still makes sense, but I found the
changelog misleading.
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 2880396..0398fab 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1512,6 +1512,7 @@ shrink_inactive_list(unsigned long nr_to_scan, struct mem_cgroup_zone *mz,
unsigned long nr_writeback = 0;
isolate_mode_t reclaim_mode = ISOLATE_INACTIVE;
struct zone *zone = mz->zone;
+ int order = 0;
while (unlikely(too_many_isolated(zone, file, sc))) {
congestion_wait(BLK_RW_ASYNC, HZ/10);
@@ -1522,8 +1523,10 @@ shrink_inactive_list(unsigned long nr_to_scan, struct mem_cgroup_zone *mz,
}
set_reclaim_mode(priority, sc, false);
- if (sc->reclaim_mode & RECLAIM_MODE_LUMPYRECLAIM)
+ if (sc->reclaim_mode & RECLAIM_MODE_LUMPYRECLAIM) {
reclaim_mode |= ISOLATE_ACTIVE;
+ order = sc->order;
+ }
lru_add_drain();
This is a nit-pick, but I would far prefer if you did not bypass
sc->order like this and instead changed isolate_lru_pages() to do a

    if (!order || !(sc->reclaim_mode & RECLAIM_MODE_LUMPYRECLAIM))
            continue;
That would very clearly mark where LUMPYRECLAIM takes effect in
isolate_lru_pages() and makes deleting LUMPYRECLAIM easier in the
future.
The second, non-obvious side-effect of this change is that kswapd will
now isolate fewer pages per cycle: SWAP_CLUSTER_MAX pages instead of the
SWAP_CLUSTER_MAX << order it potentially does currently. This is not
wrong as such, and may even be desirable to limit how much reclaim
kswapd does, but it potentially impacts success rates for compaction.
As this does not apply to THP, it will be difficult to detect, but bear
it in mind if we see an increase in high-order allocation failures after
this patch is merged. I am not suggesting a change here, but it would be
nice to note in the changelog if there is a new version of this patch.
@@ -2922,8 +2939,6 @@ out:
/* If balanced, clear the congested flag */
zone_clear_flag(zone, ZONE_CONGESTED);
- if (i <= *classzone_idx)
- balanced += zone->present_pages;
}
Why is this being deleted? The balanced count is still used by
pgdat_balanced().