[PATCH] mm: Fix infinite churning of mlocked pages

From: Minchan Kim
Date: Wed Aug 19 2009 - 19:57:18 EST


An mlocked page might lose the isolation race. This causes
the page's PG_mlocked flag to be cleared while the page still
belongs to a VM_LOCKED vma, which means it can be put on the
[in]active list. We can rescue it via try_to_unmap in
shrink_page_list.

But, as Wu Fengguang pointed out, vmscan has a bug: if the
page has PG_referenced set, it never reaches try_to_unmap in
shrink_page_list and is put back on the active list instead.
If the page is referenced repeatedly, it can stay on the
[in]active list indefinitely without ever moving to the
unevictable list.

This patch fixes it.
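The activation decision after this patch can be sketched in userspace C. This is a minimal, simplified model of the shrink_page_list() logic touched by the hunk below, not the real kernel API: costly_order stands for sc->order > PAGE_ALLOC_COSTLY_ORDER, and VM_LOCKED here is a stand-in bit. The point is that a referenced page mapped by a VM_LOCKED vma is no longer activated, so it falls through to try_to_unmap, which moves it to the unevictable list.

```c
#include <stdbool.h>

/* Stand-in for the kernel's VM_LOCKED vm_flags bit (illustrative value). */
#define VM_LOCKED 0x00002000UL

/*
 * Simplified model of the patched check in shrink_page_list():
 * activate a referenced, in-use page only when no VM_LOCKED vma
 * maps it; otherwise let reclaim fall through to try_to_unmap().
 */
bool should_activate(bool costly_order, int referenced,
		     bool mapping_inuse, unsigned long vm_flags)
{
	return !costly_order && referenced && mapping_inuse &&
	       !(vm_flags & VM_LOCKED);
}
```

With the VM_LOCKED bit set in the vm_flags that page_referenced() now reports, the page is not activated even if referenced, so the lazy mlock rescue in try_to_unmap can finally run.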

Reported-by: Wu Fengguang <fengguang.wu@xxxxxxxxx>
Signed-off-by: Minchan Kim <minchan.kim@xxxxxxxxx>
Cc: KOSAKI Motohiro <kosaki.motohiro@xxxxxxxxxxxxxx>
Cc: Lee Schermerhorn <lee.schermerhorn@xxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxx>
---
mm/rmap.c | 1 +
mm/vmscan.c | 9 +++++++--
2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 7d6fe4e..28aafe2 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -363,6 +363,7 @@ static int page_referenced_one(struct page *page,
*/
if (vma->vm_flags & VM_LOCKED) {
*mapcount = 1; /* break early from loop */
+ *vm_flags |= VM_LOCKED;
goto out_unmap;
}

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 00596b9..70a63c4 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -630,9 +630,14 @@ static unsigned long shrink_page_list(struct list_head *page_list,

referenced = page_referenced(page, 1,
sc->mem_cgroup, &vm_flags);
- /* In active use or really unfreeable? Activate it. */
+ /*
+ * In active use or really unfreeable? Activate it.
+ * If a page with PG_mlocked lost the isolation race,
+ * try_to_unmap will move it to the unevictable list.
+ */
if (sc->order <= PAGE_ALLOC_COSTLY_ORDER &&
- referenced && page_mapping_inuse(page))
+ referenced && page_mapping_inuse(page)
+ && !(vm_flags & VM_LOCKED))
goto activate_locked;

/*
--
1.5.4.3


--
Kind regards,
Minchan Kim