Re: [RFC][PATCH] HWPOISON: remove the unsafe __set_page_locked()

From: Nick Piggin
Date: Sun Sep 27 2009 - 19:01:28 EST


On Sun, Sep 27, 2009 at 10:57:29PM +0100, Hugh Dickins wrote:
> On Sun, 27 Sep 2009, Nick Piggin wrote:
> > On Sun, Sep 27, 2009 at 05:26:25PM +0100, Hugh Dickins wrote:
> > >
> > > I don't particularly like adding a GFP_LOCKED just for this, and I
> > > don't particularly like having to remember to unlock the thing on the
> > > various(?) error paths between getting the page and adding it to cache.
> >
> > God no, please no more crazy branches in the page allocator.
> >
> > I'm going to resubmit my patches to allow 0-ref page allocations,
> > so the pagecache will be able to work with those to do what we
> > want here.
> >
> > > But it is a good idea, and if doing it that way would really close a
> > > race window which checking page->mapping (or whatever) cannot (I'm
> > > simply not sure about that), then it would seem the best way to go.
> >
> > Yep, seems reasonable: the ordering is no technical burden, and a
> > simple comment pointing to hwpoison will keep it maintainable.
>
> You move from "God no" to "Yep, seems reasonable"!
>
> I think perhaps you couldn't bring yourself to believe that I was
> giving any support to Andi's GFP_LOCKED idea. Pretend I did not!
>
> I'll assume we stick with the "God no", and we'll see how what
> you come up with affects what they want.

Well, yes, I mean "no" to a GFP_LOCKED... if you follow me :)

The "reasonable" part was the basic idea of setting up the page flags
before we increment the page count, although of course we'd want to
see how all the error cases etc. pan out.
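
Something like this userspace sketch shows the ordering I have in mind
(purely illustrative names such as fake_page, FP_LOCKED and
scanner_try_get, not actual kernel code): set up the flags while the
page is still private, and only publish it afterwards by raising the
refcount that a hwpoison-style scanner keys off.

#include <stdatomic.h>
#include <stdbool.h>

struct fake_page {
	unsigned long flags;	/* non-atomic updates only while private */
	atomic_int refcount;	/* 0 means "not visible to anyone else yet" */
};

#define FP_LOCKED	(1UL << 0)

static void prepare_and_publish(struct fake_page *page)
{
	/* Safe: refcount is still 0, nobody else can see this page. */
	page->flags |= FP_LOCKED;

	/*
	 * Publish.  The release store orders the flag setup before the
	 * refcount becomes non-zero, so a scanner that gets a reference
	 * can never observe a half-initialised flags word.
	 */
	atomic_store_explicit(&page->refcount, 1, memory_order_release);
}

/* get_page_unless_zero()-style: only succeed on a published page. */
static bool scanner_try_get(struct fake_page *page)
{
	int ref = atomic_load_explicit(&page->refcount, memory_order_acquire);

	while (ref > 0) {
		if (atomic_compare_exchange_weak_explicit(&page->refcount,
				&ref, ref + 1,
				memory_order_acquire, memory_order_acquire))
			return true;
	}
	return false;	/* not published yet, leave it alone */
}

int main(void)
{
	struct fake_page page = { .flags = 0 };

	atomic_init(&page.refcount, 0);	/* allocated, not yet published */
	prepare_and_publish(&page);

	return scanner_try_get(&page) ? 0 : 1;
}

The point is just the ordering: everything that wants a non-atomic
flags setup has to happen before the page can be found and referenced
by anybody else.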

There is no real rush, as far as I can see, to fix this one single
pagecache site while we still have problems with the slab allocators
and all the other unaudited places that non-atomically modify page
flags on pages whose reference count is elevated ... just mark HWPOISON
as broken for the moment, or cut it down to do something much simpler,
I guess?
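
To make the failure mode concrete, here is the same kind of sketch (it
reuses the fake_page model above; FP_POISONED is a made-up bit): once
somebody else can take a reference, a plain read-modify-write of the
shared flags word can interleave with their update and silently drop a
bit, which is exactly the class of bug a non-atomic page flag update
risks against hwpoison.

#define FP_POISONED	(1UL << 1)

static void owner_sets_locked(struct fake_page *page)
{
	/*
	 * Compiles to: load flags, OR in FP_LOCKED, store back.  If the
	 * scanner's store lands between that load and store, the
	 * FP_POISONED bit it just set is silently overwritten.
	 */
	page->flags |= FP_LOCKED;
}

static void scanner_marks_poisoned(struct fake_page *page)
{
	if (scanner_try_get(page))
		page->flags |= FP_POISONED;	/* lost if it races above */
}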

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/