Re: [PATCH v4 1/2] mm: migration: fix the FOLL_GET failure on following huge page

From: Alistair Popple
Date: Mon Aug 15 2022 - 01:17:07 EST


On Monday, 15 August 2022 2:40:48 PM AEST Wang, Haiyue wrote:
> > -----Original Message-----
> > From: Alistair Popple <apopple@xxxxxxxxxx>
> > Sent: Monday, August 15, 2022 12:29
> > To: linux-mm@xxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx; Wang, Haiyue <haiyue.wang@xxxxxxxxx>
> > Cc: akpm@xxxxxxxxxxxxxxxxxxxx; david@xxxxxxxxxx; linmiaohe@xxxxxxxxxx; Huang, Ying <ying.huang@xxxxxxxxx>; songmuchun@xxxxxxxxxxxxx; naoya.horiguchi@xxxxxxxxx; alex.sierra@xxxxxxx; Wang, Haiyue <haiyue.wang@xxxxxxxxx>
> > Subject: Re: [PATCH v4 1/2] mm: migration: fix the FOLL_GET failure on following huge page
> >
> > On Monday, 15 August 2022 11:59:08 AM AEST Haiyue Wang wrote:
> > > Not all huge page APIs support FOLL_GET option, so the __NR_move_pages
> > > will fail to get the page node information for huge page.
> >
> > I think you should be explicit in the commit message about which functions do
> > not support FOLL_GET, as it's not obvious what support needs to be added
> > before this fix can be reverted.
>
> Yes, makes sense, I will add them in the new patch.

Actually, while you're at it, I think it would be good to include a description
of the impact of this failure in the commit message, i.e. your answer to:

> What are the user-visible runtime effects of this bug?

That documents what should be tested if this fix ever actually does get
reverted.

> >
> > Thanks.
> >
> > - Alistair
> >
> > > This is a temporary solution to mitigate the racing fix.
> > >
> > > Once following a huge page with FOLL_GET is supported, this fix can be
> > > reverted safely.
> > >
> > > Fixes: 4cd614841c06 ("mm: migration: fix possible do_pages_stat_array racing with memory offline")
> > > Signed-off-by: Haiyue Wang <haiyue.wang@xxxxxxxxx>
> > > ---
> > > mm/migrate.c | 10 ++++++++--
> > > 1 file changed, 8 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/mm/migrate.c b/mm/migrate.c
> > > index 6a1597c92261..581dfaad9257 100644
> > > --- a/mm/migrate.c
> > > +++ b/mm/migrate.c
> > > @@ -1848,6 +1848,7 @@ static void do_pages_stat_array(struct mm_struct *mm, unsigned long nr_pages,
> > >
> > >  	for (i = 0; i < nr_pages; i++) {
> > >  		unsigned long addr = (unsigned long)(*pages);
> > > +		unsigned int foll_flags = FOLL_DUMP;
> > >  		struct vm_area_struct *vma;
> > >  		struct page *page;
> > >  		int err = -EFAULT;
> > > @@ -1856,8 +1857,12 @@ static void do_pages_stat_array(struct mm_struct *mm, unsigned long nr_pages,
> > >  		if (!vma)
> > >  			goto set_status;
> > >
> > > +		/* Not all huge page follow APIs support 'FOLL_GET' */
> > > +		if (!is_vm_hugetlb_page(vma))
> > > +			foll_flags |= FOLL_GET;
> > > +
> > >  		/* FOLL_DUMP to ignore special (like zero) pages */
> > > -		page = follow_page(vma, addr, FOLL_GET | FOLL_DUMP);
> > > +		page = follow_page(vma, addr, foll_flags);
> > >
> > >  		err = PTR_ERR(page);
> > >  		if (IS_ERR(page))
> > > @@ -1865,7 +1870,8 @@ static void do_pages_stat_array(struct mm_struct *mm, unsigned long nr_pages,
> > >
> > >  		if (page && !is_zone_device_page(page)) {
> > >  			err = page_to_nid(page);
> > > -			put_page(page);
> > > +			if (foll_flags & FOLL_GET)
> > > +				put_page(page);
> > >  		} else {
> > >  			err = -ENOENT;
> > >  		}
> > >