Re: + mm-reclaim-madv_free-pages.patch added to -mm tree

From: Johannes Weiner
Date: Mon Mar 06 2017 - 10:56:22 EST


On Mon, Mar 06, 2017 at 12:03:44PM +0900, Minchan Kim wrote:
> On Fri, Mar 03, 2017 at 10:18:51AM -0500, Johannes Weiner wrote:
> > On Fri, Mar 03, 2017 at 11:52:37AM +0900, Minchan Kim wrote:
> > > On Tue, Feb 28, 2017 at 04:32:38PM -0800, akpm@xxxxxxxxxxxxxxxxxxxx wrote:
> > > >
> > > > The patch titled
> > > > Subject: mm: reclaim MADV_FREE pages
> > > > has been added to the -mm tree. Its filename is
> > > > mm-reclaim-madv_free-pages.patch
> > > >
> > > > This patch should soon appear at
> > > > http://ozlabs.org/~akpm/mmots/broken-out/mm-reclaim-madv_free-pages.patch
> > > > and later at
> > > > http://ozlabs.org/~akpm/mmotm/broken-out/mm-reclaim-madv_free-pages.patch
> > > >
> > > > Before you just go and hit "reply", please:
> > > > a) Consider who else should be cc'ed
> > > > b) Prefer to cc a suitable mailing list as well
> > > > c) Ideally: find the original patch on the mailing list and do a
> > > > reply-to-all to that, adding suitable additional cc's
> > > >
> > > > *** Remember to use Documentation/SubmitChecklist when testing your code ***
> > > >
> > > > The -mm tree is included into linux-next and is updated
> > > > there every 3-4 working days
> > > >
> > > > ------------------------------------------------------
> > > > From: Shaohua Li <shli@xxxxxx>
> > > > Subject: mm: reclaim MADV_FREE pages
> > > >
> > > > When memory pressure is high, we free MADV_FREE pages. If the pages are
> > > > not dirty in the pte, they can be freed immediately. Otherwise we
> > > > can't reclaim them. We put such pages back on the anonymous LRU list (by
> > > > setting the SwapBacked flag) and they will be reclaimed through the
> > > > normal swapout path.
> > > >
> > > > We use the normal page reclaim policy. Since MADV_FREE pages are put on
> > > > the inactive file list, such pages and inactive file pages are reclaimed
> > > > according to their age. This is expected, because we don't want to
> > > > reclaim too many MADV_FREE pages before used-once pages.
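
For context: the hint being reclaimed here is set from userspace with
madvise(2). Below is a minimal, self-contained sketch of marking a buffer
lazily freeable -- illustrative only, not part of the patch, and assuming a
libc/kernel new enough to define MADV_FREE; sizes and fill values are
arbitrary.

#define _DEFAULT_SOURCE
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 64 * 4096;
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED)
		return 1;

	memset(buf, 0xaa, len);		/* pages are now dirty anon */

	/* Contents are disposable from here on. */
	if (madvise(buf, len, MADV_FREE))
		return 1;

	/*
	 * A write before reclaim redirties the page in the pte, so the
	 * kernel must keep it -- the dirty-MADV_FREE case the hunks
	 * below deal with.
	 */
	buf[0] = 1;

	munmap(buf, len);
	return 0;
}

If reclaim gets to the range before it is touched again, later reads see
zero-filled pages; otherwise the original data survives.
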
> > > >
> > > > Based on Minchan's original patch
> > > >
> > > > Link: http://lkml.kernel.org/r/14b8eb1d3f6bf6cc492833f183ac8c304e560484.1487965799.git.shli@xxxxxx
> > > > Signed-off-by: Shaohua Li <shli@xxxxxx>
> > > > Acked-by: Minchan Kim <minchan@xxxxxxxxxx>
> > > > Acked-by: Michal Hocko <mhocko@xxxxxxxx>
> > > > Acked-by: Johannes Weiner <hannes@xxxxxxxxxxx>
> > > > Acked-by: Hillf Danton <hillf.zj@xxxxxxxxxxxxxxx>
> > > > Cc: Hugh Dickins <hughd@xxxxxxxxxx>
> > > > Cc: Rik van Riel <riel@xxxxxxxxxx>
> > > > Cc: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
> > > > Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> > > > ---
> > >
> > > < snip >
> > >
> > > > @@ -1419,11 +1413,21 @@ static int try_to_unmap_one(struct page
> > > > VM_BUG_ON_PAGE(!PageSwapCache(page) && PageSwapBacked(page),
> > > > page);
> > > >
> > > > - if (!PageDirty(page)) {
> > > > + /*
> > > > + * swapin page could be clean, it has data stored in
> > > > + * swap. We can't silently discard it without setting
> > > > + * swap entry in the page table.
> > > > + */
> > > > + if (!PageDirty(page) && !PageSwapCache(page)) {
> > > > /* It's a freeable page by MADV_FREE */
> > > > dec_mm_counter(mm, MM_ANONPAGES);
> > > > - rp->lazyfreed++;
> > > > goto discard;
> > > > + } else if (!PageSwapBacked(page)) {
> > > > + /* dirty MADV_FREE page */
> > > > + set_pte_at(mm, address, pvmw.pte, pteval);
> > > > + ret = SWAP_DIRTY;
> > > > + page_vma_mapped_walk_done(&pvmw);
> > > > + break;
> > > > }
> > >
> > > There is no point in complicating this logic with the clean
> > > swapin-page case.
> > >
> > > Andrew,
> > > Could you fold below patch into the mm-reclaim-madv_free-pages.patch
> > > if others are not against?
> > >
> > > Thanks.
> > >
> > > From 0c28f6560fbc4e65da4f4a8cc4664ab9f7b11cf3 Mon Sep 17 00:00:00 2001
> > > From: Minchan Kim <minchan@xxxxxxxxxx>
> > > Date: Fri, 3 Mar 2017 11:42:52 +0900
> > > Subject: [PATCH] mm: clean up lazyfree page handling
> > >
> > > We can make this simpler to understand without needing to be aware of
> > > clean swapin pages.
> > > This patch just cleans up the lazyfree page handling in try_to_unmap_one.
> > >
> > > Signed-off-by: Minchan Kim <minchan@xxxxxxxxxx>
> >
> > Agreed, this is a little easier to follow.
> >
> > Acked-by: Johannes Weiner <hannes@xxxxxxxxxxx>
>
> Thanks, Johannes.
>
> >
> > > ---
> > > mm/rmap.c | 22 +++++++++++-----------
> > > 1 file changed, 11 insertions(+), 11 deletions(-)
> > >
> > > diff --git a/mm/rmap.c b/mm/rmap.c
> > > index bb45712..f7eab40 100644
> > > --- a/mm/rmap.c
> > > +++ b/mm/rmap.c
> > > @@ -1413,17 +1413,17 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
> > > VM_BUG_ON_PAGE(!PageSwapCache(page) && PageSwapBacked(page),
> > > page);
> >
> > Since you're removing the PageSwapCache() check and we're now assuming
> > that !swapbacked is not in the swapcache, can you modify this to check
> > PageSwapBacked(page) != PageSwapCache(page)?
> >
> > Better yet, change it into a warning and SWAP_FAIL.
>
> Maybe, what you wanted is
>
> !!PageSwapBacked(page) != !!PageSwapCache(page)

Those testers already return 0-or-1 ints, on x86 even bool, so the !!
isn't needed.
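
A standalone illustration of that point -- the PG_* bit values and the
raw-mask helpers below are made up for the sketch, they are not the
kernel's:

#include <assert.h>

#define PG_swapbacked	(1UL << 3)	/* arbitrary bits, sketch only */
#define PG_swapcache	(1UL << 4)

/* Raw-mask style tests: they return 0 or the bit value, not 0/1. */
static unsigned long raw_swapbacked(unsigned long flags)
{
	return flags & PG_swapbacked;
}

static unsigned long raw_swapcache(unsigned long flags)
{
	return flags & PG_swapcache;
}

int main(void)
{
	unsigned long flags = PG_swapbacked | PG_swapcache;

	/* Both bits are set, yet the raw mask values differ ... */
	assert(raw_swapbacked(flags) != raw_swapcache(flags));

	/* ... so raw-mask tests do need !! before being compared ... */
	assert(!!raw_swapbacked(flags) == !!raw_swapcache(flags));

	/*
	 * ... whereas testers that already return 0 or 1, like
	 * PageSwapBacked()/PageSwapCache(), can be compared with
	 * == or != directly.
	 */
	return 0;
}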

> Personally, I prefer the && style rather than an equality comparison
> in this case.

> @@ -1413,20 +1413,24 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
> * Store the swap location in the pte.
> * See handle_pte_fault() ...
> */
> - VM_BUG_ON_PAGE(!PageSwapCache(page) && PageSwapBacked(page),
> - page);
> + if (VM_WARN_ON_ONCE(PageSwapBacked(page) &&
> + !PageSwapCache(page))) {
> + ret = SWAP_FAIL;

But you're not adding the !swapbacked && swapcache case? (A sketch
covering both directions follows below the hunk.)

> + page_vma_mapped_walk_done(&pvmw);
> + break;
> + }
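
Something along these lines would cover both directions -- sketch only,
untested, with plain WARN_ON_ONCE so the condition is still evaluated
and testable without CONFIG_DEBUG_VM, and the rest mirroring your hunk:

	/*
	 * SwapBacked and SwapCache must agree here; warn on either
	 * direction of mismatch and give up on this page.
	 */
	if (WARN_ON_ONCE(PageSwapBacked(page) != PageSwapCache(page))) {
		ret = SWAP_FAIL;
		page_vma_mapped_walk_done(&pvmw);
		break;
	}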

[...]

> - /*
> - * swapin page could be clean, it has data stored in
> - * swap. We can't silently discard it without setting
> - * swap entry in the page table.
> - */
> - if (!PageDirty(page) && !PageSwapCache(page)) {
> - /* It's a freeable page by MADV_FREE */
> - dec_mm_counter(mm, MM_ANONPAGES);
> - goto discard;
> - } else if (!PageSwapBacked(page)) {
> - /* dirty MADV_FREE page */
> + /* MADV_FREE page check */
> + if (!PageSwapBacked(page)) {
> + if (!PageDirty(page)) {
> + dec_mm_counter(mm, MM_ANONPAGES);
> + goto discard;
> + }

Andrew already has this; you might want to send the warning changes as
a separate patch on top of this one.