[pchecks v1 3/4] Use raw_cpu_ops for refresh_cpu_vm_stats()

From: Christoph Lameter
Date: Mon Sep 23 2013 - 15:24:32 EST


We do not care about races in the expiration logic of
refresh_cpu_vm_stats(): the worst a racing update to p->expire could
do is shift a zone drain by one interval. Draining is a rare
operation after all, so there is no need to pay preemption-check
overhead for it.

Use raw_cpu_ops there.

Signed-off-by: Christoph Lameter <cl@xxxxxxxxx>
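
The two families differ only in checking: the __this_cpu ops perform
the access plus, under CONFIG_DEBUG_PREEMPT, a check that the caller
cannot be migrated, while the raw_cpu ops perform the access alone.
As a minimal userspace sketch of the situation, not kernel code
(checked_cpu_write and raw_cpu_write_sim are hypothetical stand-ins
for the kernel macros, and a pinned pthread stands in for the
per-CPU vmstat worker; build with -pthread):

#define _GNU_SOURCE
#include <assert.h>
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static int expire[64];			/* one slot per "CPU" */
static __thread int preempt_disabled;	/* stand-in for preempt_count() */

/* Analogue of __this_cpu_write() under CONFIG_DEBUG_PREEMPT. */
static void checked_cpu_write(int *slot, int v)
{
	assert(preempt_disabled);	/* fires if the caller only relies on pinning */
	*slot = v;
}

/* Analogue of raw_cpu_write(): the same store, no check. */
static void raw_cpu_write_sim(int *slot, int v)
{
	*slot = v;
}

int main(void)
{
	cpu_set_t set;
	int cpu;

	/* Pin this thread to CPU 0, as the vmstat worker is pinned. */
	CPU_ZERO(&set);
	CPU_SET(0, &set);
	if (pthread_setaffinity_np(pthread_self(), sizeof(set), &set))
		return 1;
	cpu = sched_getcpu();

	/* The checked variant is satisfied only when the flag says
	 * "cannot be preempted". */
	preempt_disabled = 1;
	checked_cpu_write(&expire[cpu], 0);
	preempt_disabled = 0;

	/* A pinned-but-preemptible thread is equally safe for its own
	 * slot, yet checked_cpu_write() would assert here. The raw
	 * variant performs the identical store without the false
	 * positive. */
	raw_cpu_write_sim(&expire[cpu], 3);

	printf("expire[%d] = %d\n", cpu, expire[cpu]);
	return 0;
}

A pinned thread is exactly as safe as a preempt-disabled one when
touching its own per-CPU data, which is the situation the patch is
about: only the debug check changes, not the generated access.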

Index: linux/mm/vmstat.c
===================================================================
--- linux.orig/mm/vmstat.c 2013-09-23 10:20:31.742262228 -0500
+++ linux/mm/vmstat.c 2013-09-23 10:20:31.738262268 -0500
@@ -439,6 +439,10 @@ static inline void fold_diff(int *diff)
  * statistics in the remote zone struct as well as the global cachelines
  * with the global counters. These could cause remote node cache line
  * bouncing and will have to be only done when necessary.
+ *
+ * Note that we have to use raw_cpu ops here. The thread is pinned
+ * to a specific processor but the preempt checking logic does not
+ * know about this.
  */
 static void refresh_cpu_vm_stats(void)
 {
@@ -459,7 +463,7 @@ static void refresh_cpu_vm_stats(void)
 				global_diff[i] += v;
 #ifdef CONFIG_NUMA
 				/* 3 seconds idle till flush */
-				__this_cpu_write(p->expire, 3);
+				raw_cpu_write(p->expire, 3);
 #endif
 			}
 		}
@@ -472,23 +476,23 @@ static void refresh_cpu_vm_stats(void)
 		 * Check if there are pages remaining in this pageset
 		 * if not then there is nothing to expire.
 		 */
-		if (!__this_cpu_read(p->expire) ||
-		    !__this_cpu_read(p->pcp.count))
+		if (!raw_cpu_read(p->expire) ||
+		    !raw_cpu_read(p->pcp.count))
 			continue;
 
 		/*
 		 * We never drain zones local to this processor.
 		 */
 		if (zone_to_nid(zone) == numa_node_id()) {
-			__this_cpu_write(p->expire, 0);
+			raw_cpu_write(p->expire, 0);
 			continue;
 		}
 
 
-		if (__this_cpu_dec_return(p->expire))
+		if (raw_cpu_dec_return(p->expire))
 			continue;
 
-		if (__this_cpu_read(p->pcp.count))
+		if (raw_cpu_read(p->pcp.count))
 			drain_zone_pages(zone, __this_cpu_ptr(&p->pcp));
 #endif
 	}
