Re: [PATCHv2] mm/gup: Cache dev_pagemap while pinning pages

From: Kirill A. Shutemov
Date: Fri Oct 12 2018 - 07:00:29 EST


On Thu, Oct 11, 2018 at 04:24:02PM -0700, Dan Williams wrote:
> On Thu, Oct 11, 2018 at 11:00 AM Keith Busch <keith.busch@xxxxxxxxx> wrote:
> >
> > Getting pages from ZONE_DEVICE memory needs to check the backing device's
> > live-ness, which is tracked in the device's dev_pagemap metadata. This
> > metadata is stored in a radix tree and looking it up adds measurable
> > software overhead.
> >
> > This patch avoids repeating this relatively costly operation when
> > dev_pagemap is used by caching the last dev_pagemap while getting user
> > pages. The gup_benchmark kernel self test reports this reduces time to
> > get user pages to as low as 1/3 of the previous time.
> >
> > Cc: Kirill Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
> > Cc: Dave Hansen <dave.hansen@xxxxxxxxx>
> > Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
> > Signed-off-by: Keith Busch <keith.busch@xxxxxxxxx>
>
> Other than the 2 comments below, this looks good to me:
>
> Reviewed-by: Dan Williams <dan.j.williams@xxxxxxxxx>

Looks good to me too:

Acked-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>

>
> [..]
> > diff --git a/mm/gup.c b/mm/gup.c
> > index 1abc8b4afff6..d2700dff6f66 100644
> > --- a/mm/gup.c
> > +++ b/mm/gup.c
> [..]
> > @@ -431,7 +430,22 @@ struct page *follow_page_mask(struct vm_area_struct *vma,
> > return no_page_table(vma, flags);
> > }
> >
> > - return follow_p4d_mask(vma, address, pgd, flags, page_mask);
> > + return follow_p4d_mask(vma, address, pgd, flags, ctx);
> > +}
> > +
> > +struct page *follow_page(struct vm_area_struct *vma, unsigned long address,
> > + unsigned int foll_flags)
> > +{
> > + struct page *page;
> > + struct follow_page_context ctx = {
> > + .pgmap = NULL,
> > + .page_mask = 0,
> > + };
>
> You don't need to init all members. It is defined that if you init at
> least one member then all uninitialized members are set to zero, so
> you should be able to do " = { 0 }".
>
> > +
> > + page = follow_page_mask(vma, address, foll_flags, &ctx);
> > + if (ctx.pgmap)
> > + put_dev_pagemap(ctx.pgmap);
> > + return page;
> > }
> >
> > static int get_gate_page(struct mm_struct *mm, unsigned long address,
> > @@ -659,9 +673,9 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
> > unsigned int gup_flags, struct page **pages,
> > struct vm_area_struct **vmas, int *nonblocking)
> > {
> > - long i = 0;
> > - unsigned int page_mask;
> > + long ret = 0, i = 0;
> > struct vm_area_struct *vma = NULL;
> > + struct follow_page_context ctx = {};
>
> Does this have defined behavior? I would feel better with " = { 0 }"
> to be explicit.

Well, it's not allowed by the standard, but GCC allows it.
You can see a warning with -pedantic.

We use empty-list initializers a lot in the kernel:
$ git grep 'struct .*= {};' | wc -l
997

It should be fine.
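
For reference, a minimal sketch of the three initializer styles discussed
above, reusing the struct from the patch purely for illustration:

	struct follow_page_context {
		struct dev_pagemap *pgmap;
		unsigned int page_mask;
	};

	/* C99 6.7.8p21: members without an explicit initializer are zeroed */
	struct follow_page_context a = { .pgmap = NULL };	/* page_mask becomes 0 */
	struct follow_page_context b = { 0 };			/* all members zeroed */

	/* empty initializer list: a GNU extension, -pedantic warns about it */
	struct follow_page_context c = {};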

--
Kirill A. Shutemov