Re: [PATCH] vmscan: ZVC updates in shrink_active_list() can be done once

From: Wu Fengguang
Date: Tue May 05 2009 - 09:13:22 EST


On Tue, May 05, 2009 at 08:33:50PM +0800, Johannes Weiner wrote:
> On Tue, May 05, 2009 at 07:44:55AM +0800, Wu Fengguang wrote:
> > This effectively lifts the unit of nr_inactive_* and pgdeactivate updates
> > from PAGEVEC_SIZE=14 to SWAP_CLUSTER_MAX=32.
>
> For __zone_reclaim() it will be >= SWAP_CLUSTER_MAX, depending on the
> allocation order.

Thanks for pointing this out. Changelog updated accordingly.

I'd expect its impact to be negligible, because 1024 pages is not
extraordinarily large. But sure, it is much larger than 125 pages,
the maximum zone stats update threshold (zone_pcp->stat_threshold).
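
For reference, here is a minimal sketch of how a ZVC update folds the
per-cpu delta into the zone-wide counter once it crosses stat_threshold
(simplified from __mod_zone_page_state(); preemption handling omitted,
and the helper name zvc_update_sketch is made up for illustration):

	static void zvc_update_sketch(struct zone *zone,
				      enum zone_stat_item item, int delta)
	{
		struct per_cpu_pageset *pcp = zone_pcp(zone, smp_processor_id());
		s8 *diff = pcp->vm_stat_diff + item;
		long x = delta + *diff;

		if (x > pcp->stat_threshold || x < -pcp->stat_threshold) {
			/* fold the accumulated delta into the global counter */
			zone_page_state_add(x, zone, item);
			x = 0;
		}
		*diff = x;
	}

Small deltas just sit in the per-cpu diff; a delta larger than the
threshold (e.g. a 1024 page batch) is folded into the zone counter
right away.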

Thanks,
Fengguang
---
vmscan: ZVC updates in shrink_active_list() can be done once

This effectively lifts the unit of updates to nr_inactive_* and pgdeactivate
from PAGEVEC_SIZE=14 to SWAP_CLUSTER_MAX=32, or MAX_ORDER_NR_PAGES=1024 for
__zone_reclaim().

CC: Johannes Weiner <hannes@xxxxxxxxxxx>
Acked-by: Rik van Riel <riel@xxxxxxxxxx>
Reviewed-by: Minchan Kim <minchan.kim@xxxxxxxxx>
Signed-off-by: Wu Fengguang <fengguang.wu@xxxxxxxxx>
---
mm/vmscan.c | 11 +++--------
1 file changed, 3 insertions(+), 8 deletions(-)

--- linux.orig/mm/vmscan.c
+++ linux/mm/vmscan.c
@@ -1228,7 +1228,6 @@ static void shrink_active_list(unsigned
struct scan_control *sc, int priority, int file)
{
unsigned long pgmoved;
- int pgdeactivate = 0;
unsigned long pgscanned;
LIST_HEAD(l_hold); /* The pages which were snipped off */
LIST_HEAD(l_inactive);
@@ -1257,7 +1256,7 @@ static void shrink_active_list(unsigned
__mod_zone_page_state(zone, NR_ACTIVE_ANON, -pgmoved);
spin_unlock_irq(&zone->lru_lock);

- pgmoved = 0;
+ pgmoved = 0; /* count referenced (mapping) mapped pages */
while (!list_empty(&l_hold)) {
cond_resched();
page = lru_to_page(&l_hold);
@@ -1291,7 +1290,7 @@ static void shrink_active_list(unsigned
*/
reclaim_stat->recent_rotated[!!file] += pgmoved;

- pgmoved = 0;
+ pgmoved = 0; /* count pages moved to inactive list */
while (!list_empty(&l_inactive)) {
page = lru_to_page(&l_inactive);
prefetchw_prev_lru_page(page, &l_inactive, flags);
@@ -1304,10 +1303,7 @@ static void shrink_active_list(unsigned
mem_cgroup_add_lru_list(page, lru);
pgmoved++;
if (!pagevec_add(&pvec, page)) {
- __mod_zone_page_state(zone, NR_LRU_BASE + lru, pgmoved);
spin_unlock_irq(&zone->lru_lock);
- pgdeactivate += pgmoved;
- pgmoved = 0;
if (buffer_heads_over_limit)
pagevec_strip(&pvec);
__pagevec_release(&pvec);
@@ -1315,9 +1311,8 @@ static void shrink_active_list(unsigned
}
}
__mod_zone_page_state(zone, NR_LRU_BASE + lru, pgmoved);
- pgdeactivate += pgmoved;
__count_zone_vm_events(PGREFILL, zone, pgscanned);
- __count_vm_events(PGDEACTIVATE, pgdeactivate);
+ __count_vm_events(PGDEACTIVATE, pgmoved);
spin_unlock_irq(&zone->lru_lock);
if (buffer_heads_over_limit)
pagevec_strip(&pvec);
--