[PATCH 0/3, v2] mprotect() and working set sampling optimizations

From: Ingo Molnar
Date: Wed Nov 14 2012 - 04:19:02 EST


Ok, people suggested splitting out the change_protection() modification
into a separate third patch.

This series implements an mprotect() optimization that also
helps improve the quality of working set scanning:

- working set scanning gets faster

- we can scan at a touched-page rate instead of a rate proportional
to the size of the virtual memory range (within limits).
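To illustrate the idea behind the series, here is a minimal userspace sketch. It is not the kernel code: the PTE flags, the array of fake PTEs, and the names change_protection_count() and mprotect_range() are all toy stand-ins. The point it models is the one the patch titles describe: change_protection() reports how many present PTEs it actually modified, so the caller can skip the TLB flush when nothing changed, and a working-set scanner can advance by touched pages rather than by virtual address range.

```c
/* Toy model of the change_protection() counting optimization.
 * Assumptions (not the real kernel API): PTEs are plain words in an
 * array, PTE_PRESENT/PTE_WRITE are made-up flag bits, and a global
 * counter stands in for flush_tlb_range().
 */
#include <assert.h>
#include <stddef.h>

#define PTE_PRESENT 0x1UL
#define PTE_WRITE   0x2UL

static int tlb_flushes; /* counts simulated TLB flushes */

/* Hypothetical stand-in for change_protection(): clear the write bit
 * on every present PTE in [start, end) and return how many PTEs were
 * actually modified. */
static unsigned long change_protection_count(unsigned long *ptes,
                                             size_t start, size_t end)
{
    unsigned long pages = 0;

    for (size_t i = start; i < end; i++) {
        if (!(ptes[i] & PTE_PRESENT))
            continue; /* not present: nothing to change, nothing to count */
        if (ptes[i] & PTE_WRITE) {
            ptes[i] &= ~PTE_WRITE;
            pages++;
        }
    }
    return pages;
}

/* Caller in the style of sys_mprotect(): only pay for a TLB flush if
 * at least one PTE was changed. */
static unsigned long mprotect_range(unsigned long *ptes,
                                    size_t start, size_t end)
{
    unsigned long pages = change_protection_count(ptes, start, end);

    if (pages)
        tlb_flushes++; /* flush_tlb_range() in the kernel */
    return pages;
}
```

A second call over an already write-protected range returns 0 and triggers no flush, which is exactly the case the optimization targets; the same return value is what lets the scanner charge its quota against present PTEs instead of virtual memory.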

This is already part of numa/core, but I wanted to send it out
separately as well, to get specific feedback on the mprotect()
bits.

Thanks,

Ingo

---
Ingo Molnar (1):
mm: Optimize the TLB flush of sys_mprotect() and change_protection()
users

Peter Zijlstra (2):
mm: Count the number of pages affected in change_protection()
sched, numa, mm: Count WS scanning against present PTEs, not virtual
memory ranges

include/linux/hugetlb.h | 8 ++++++--
include/linux/mm.h | 6 +++---
kernel/sched/fair.c | 37 +++++++++++++++++++++----------------
mm/hugetlb.c | 10 ++++++++--
mm/mprotect.c | 46 ++++++++++++++++++++++++++++++++++------------
5 files changed, 72 insertions(+), 35 deletions(-)

--
1.7.11.7
