Re: [PATCH -mm 17/25] Mlocked Pages are non-reclaimable

From: Lee Schermerhorn
Date: Tue Jun 10 2008 - 17:43:12 EST


On Tue, 2008-06-10 at 17:14 -0400, Rik van Riel wrote:
> On Tue, 10 Jun 2008 05:31:30 +0200
> Nick Piggin <npiggin@xxxxxxx> wrote:
>
> > If we eventually run out of page flags on 32 bit, then sure this might be
> > one we could look at getting rid of. Once the code has proven itself.
>
> Yes, after the code has proven stable, we can probably get
> rid of the PG_mlocked bit and use only PG_unevictable to mark
> these pages.
>
> Lee, Kosaki-san, do you see any problem with that approach?
> Is the PG_mlocked bit really necessary for non-debugging
> purposes?
>

Well, it does speed up the check for mlocked pages in page_reclaimable()
[now page_evictable()?] as we don't have to walk the reverse map to
determine that a page is mlocked. In many places where we currently
test page_reclaimable(), we really don't want to and maybe can't walk
the reverse map.

Unless you're envisioning even larger rework, the PG_unevictable flag
[formerly PG_noreclaim, right?] is analogous to PG_active. It's only
set while the page is on the corresponding lru list or being held
isolated from it, temporarily. See isolate_lru_page() and
putback_lru_page() and users thereof--such as mlock_vma_page(). Again,
I haven't seen what changes you're making here, so maybe that's all
changing. But, currently, PG_unevictable would not be a replacement for
PG_mlocked.

Anyway, let's see what you come up with before we tackle this.

Couple of related items:

+ 26-rc5-mm1 + a small fix to the double unlock_page() in
shrink_page_list() has been running for a couple of hours on my 32G,
16cpu ia64 numa platform w/o error. Seems to have survived the merge
into -mm, despite the issues Andrew has raised.

+ on same platform, Mel Gorman's mminit debug code is reporting that
we're using 22 page flags with Noreclaim, Mlock and PAGEFLAGS_EXTENDED
configured.

Lee

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/