Re: [PATCH v2] mm/page_alloc: Leave IRQs enabled for per-cpu page allocations

From: Hugh Dickins
Date: Tue Nov 08 2022 - 10:47:31 EST


On Tue, 8 Nov 2022, Mel Gorman wrote:
> On Tue, Nov 08, 2022 at 01:40:48AM -0800, Hugh Dickins wrote:
> > On Mon, 7 Nov 2022, Mel Gorman wrote:
> > > On Sun, Nov 06, 2022 at 08:42:32AM -0800, Hugh Dickins wrote:
> > > > On Fri, 4 Nov 2022, Mel Gorman wrote:
> > > > What I'm really trying to do is fix
> > > > the bug in Linus's rmap/TLB series, and its interaction with my
> > > > rmap series, and report back on his series (asking for temporary
> > > > drop), before next-20221107 goes down in flames.
> > > >
> > > > I'd advocate for dropping this patch of yours too; but if it's giving
> > > > nobody else any trouble, I can easily continue to patch it out.
> > > >
> > >
> > > Given that you tested the patch against v6.1-rc3, it's clear that the
> > > patch on its own causes problems. Having a reproduction case will help
> > > me figure out why.
> >
> > Sorry for appearing to ignore your requests all day, Mel, but I just
> > had slightly more confidence in debugging it here, than in conveying
> > all the details of my load (some other time), and my config, and
> > actually enabling you to reproduce it. Had to focus.
> >
>
> Ok, understood. If you ever get the chance to give me even a rough
> description, I'd appreciate it but I understand that it's a distraction
> at the moment. Thanks for taking the time to debug this in your test
> environment.

I have sent it out two or three times before (I prefer privately, so my
limited shell scripting skills are not on public view!): later in the day
I'll look out the last time I sent it, and just forward that to you.
No doubt I'll have tweaked it here and there in my own usage since then,
but that will be good enough to show what I try for.

Wonderful if it could get into something like mmtests, but I should
warn that attempts to incorporate it into other frameworks have not
been successful in the past. Maybe it just needs too much handholding:
getting the balance right, so that it's stressing without quite OOMing,
is difficult in any new environment. Perhaps it should restart after
OOM, I just never tried to go that way.

>
> > Got it at last: free_unref_page_list() has been surviving on the
> > "next" in its list_for_each_entry_safe() for years(?), without doing
> > a proper list_del() in that block: only with your list_del() before
> > free_one_page() did it start to go so very wrong. (Or was there any
> > way in which it might already have been wrong, and needs backport?)
> >
>
> I think it has happened to work by coincidence since forever because it
> was always adding to the same list. Even though the temporary list was
> thrashed, it was always either ignored or reinitialised.
>
> I've made this a standalone patch which is at the end of the mail. I can
> change the Reported-by to a Signed-off-by if you agree. While it doesn't
> fix anything today, it may still be worth documenting in git history why
> that list_del exists.

Yes, worth separating out to its own git commit. Just continue with what
you already have, Reported-by me, Signed-off-by you, thanks for asking.
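
For the archives, the coincidence can be sketched in miniature. This is a
toy model with stand-in list primitives (hypothetical names, not the real
<linux/list.h> or mm/page_alloc.c code): the pages are moved by list_add()
alone, with no prior list_del(), yet the walk completes because the
_safe-style iteration caches the next pointer before the body runs, and the
thrashed source head only needs to be ignored or reinitialised afterwards.

```c
#include <assert.h>
#include <stddef.h>

/* Toy stand-ins for the kernel's intrusive lists --
 * a hypothetical simplification, not the real kernel code. */
struct list_head { struct list_head *next, *prev; };

void INIT_LIST_HEAD(struct list_head *h) { h->next = h->prev = h; }

void list_add(struct list_head *new, struct list_head *head)
{
	new->next = head->next;
	new->prev = head;
	head->next->prev = new;
	head->next = new;
}

struct page { int id; struct list_head lru; };

#define list_entry(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Walk 'list' and move every page onto 'pcp_list' WITHOUT a prior
 * list_del(), as the old free_unref_page_list() did: list_add() simply
 * overwrites page->lru's link fields.  The walk still completes because
 * the _safe-style iteration caches 'next' before the body runs; the
 * source head is left stale ("thrashed") and must be ignored or
 * reinitialised afterwards. */
int drain_to_pcp(struct list_head *list, struct list_head *pcp_list)
{
	struct list_head *pos, *next;
	int moved = 0;

	for (pos = list->next, next = pos->next; pos != list;
	     pos = next, next = pos->next) {
		struct page *page = list_entry(pos, struct page, lru);
		list_add(&page->lru, pcp_list);	/* no list_del() first */
		moved++;
	}
	INIT_LIST_HEAD(list);	/* head's links are garbage now; reset */
	return moved;
}

int demo(void)
{
	struct page pages[3] = { { .id = 0 }, { .id = 1 }, { .id = 2 } };
	struct list_head list, pcp_list;
	int i;

	INIT_LIST_HEAD(&list);
	INIT_LIST_HEAD(&pcp_list);
	for (i = 0; i < 3; i++)
		list_add(&pages[i].lru, &list);

	return drain_to_pcp(&list, &pcp_list);
}
```

Once a second destination list enters the picture, as in Mel's series, the
same omission corrupts whichever list the page was not deleted from.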

>
> > Here's a few things to fold into your patch: I've moved your
> > list_del() up to cover both cases, that's the important fix;
> > but prior to finding that, I did notice a "locked_zone = NULL"
> > needed, and was very disappointed when that didn't fix the issues;
>
> This is a real fix, but the code also happens to work properly without
> it, which is less than ideal because it's fragile.

I thought that if the next page's zone matched the stale locked_zone,
then it would head into free_unref_page_commit() with NULL pcp, and
oops there? But I've certainly never seen that happen (despite first
assuming it was responsible for my crashes), so maybe I read it wrong.
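
To make my reading concrete (whether or not the real code can actually
reach this state), here is a toy model of the batching loop, with
hypothetical names standing in for the real mm/page_alloc.c control flow:
if a pcp trylock fails and locked_zone is not reset, a later page from the
same zone matches the stale locked_zone, skips reacquiring, and reaches the
commit step with no pcp.

```c
#include <assert.h>

/* Toy model of the batching loop under discussion -- hypothetical
 * names and control flow, not the real mm/page_alloc.c.  'trylock_ok'
 * scripts which pcp trylock attempts succeed; 'reset_on_failure'
 * models the "locked_zone = NULL" fix.  Returns 1 if any page would
 * reach the commit step with a NULL pcp (the oops described above). */
int simulate(const int *zones, int n, const int *trylock_ok,
	     int reset_on_failure)
{
	int locked_zone = -1;		/* no zone's pcp lock held */
	int have_pcp = 0;		/* stands in for a non-NULL pcp */
	int i, attempt = 0, null_commit = 0;

	for (i = 0; i < n; i++) {
		/* Different zone, different pcp lock. */
		if (zones[i] != locked_zone) {
			have_pcp = 0;	/* drop previous zone's pcp lock */
			if (trylock_ok[attempt++]) {
				have_pcp = 1;
				locked_zone = zones[i];
			} else {
				/* trylock failed: free this page straight
				 * to the buddy list instead; no pcp lock
				 * is held now */
				if (reset_on_failure)
					locked_zone = -1;  /* the fix */
				continue;
			}
		}
		/* free_unref_page_commit()-like step: uses pcp */
		if (!have_pcp)
			null_commit = 1;  /* stale locked_zone matched */
	}
	return null_commit;
}
```

With a zone sequence like {0, 1, 0} where the lock attempt for zone 1
fails, the model hits the NULL-pcp commit unless locked_zone is reset on
the failure path.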

>
> > zone instead of page_zone(page), batch_count = 0, lock hold times
> > were just improvements I noticed along the way.
> >
>
> The first is a small optimisation, the second addresses a corner case where
> the lock may be released/reacquired too soon after switching from one zone to
> another and the comment fix is valid. I've simply folded these in directly.
>
> The standalone patch is below, I'm rerunning tests before posting a
> short v3 series.

Great, thanks Mel.

Hugh

>
> Thanks!
>
> --8<--
> mm/page_alloc: Always remove pages from temporary list
>
> free_unref_page_list() has neglected to remove pages properly from the list
> of pages to free since forever. It works by coincidence because list_add
> happened to do the right thing, adding the pages only to the PCP lists.
> However, a later patch added pages to either the PCP list or the zone list
> but only properly deleted the page from the list in one path leading to
> list corruption and a subsequent failure. As a preparation patch, always
> delete the pages from one list properly before adding to another. On its
> own, this fixes nothing (and adds a fractional amount of overhead), but
> it is critical to the next patch.
>
> Reported-by: Hugh Dickins <hughd@xxxxxxxxxx>
> Signed-off-by: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>
> ---
> mm/page_alloc.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 218b28ee49ed..1ec54173b8d4 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3546,6 +3546,8 @@ void free_unref_page_list(struct list_head *list)
>  	list_for_each_entry_safe(page, next, list, lru) {
>  		struct zone *zone = page_zone(page);
>  
> +		list_del(&page->lru);
> +
>  		/* Different zone, different pcp lock. */
>  		if (zone != locked_zone) {
>  			if (pcp)
>
> --
> Mel Gorman
> SUSE Labs