Re: [RFC 1/3] lib: copy_{from,to}_user using gup & kmap_atomic()

From: afzal mohammed
Date: Fri Jun 12 2020 - 09:56:05 EST


Hi,

On Fri, Jun 12, 2020 at 02:02:13PM +0200, Arnd Bergmann wrote:
> On Fri, Jun 12, 2020 at 12:18 PM afzal mohammed <afzal.mohd.ma@xxxxxxxxx> wrote:

> > Roughly a one-third drop in performance. Disabling highmem improves
> > performance only slightly.

> There are probably some things that can be done to optimize it,
> but I guess most of the overhead is from the page table operations
> and cannot be avoided.

Ingo's series did a follow_page() first and invoked get_user_pages()
only as a fallback; I will try that approach as well.
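
That fallback pattern would look roughly like the below (a sketch, not
code from Ingo's series; the caller is assumed to hold mmap_sem, and
pin_one_page() is a made-up name):

static struct page *pin_one_page(struct vm_area_struct *vma,
				 unsigned long addr, unsigned int gup_flags)
{
	/* cheap page table walk first; FOLL_GET takes a reference */
	struct page *page = follow_page(vma, addr, gup_flags | FOLL_GET);

	if (!IS_ERR_OR_NULL(page))
		return page;

	/* slow path: fault the page in via the full get_user_pages() */
	if (get_user_pages(addr, 1, gup_flags, &page, NULL) != 1)
		return NULL;

	return page;
}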

Yes, I too suspect the get_user_pages_fast() path is the most time
consuming; I will instrument it and check.

> What was the exact 'dd' command you used, in particular the block size?
> Note that by default, 'dd' will request 512 bytes at a time, so you usually
> only access a single page. It would be interesting to see the overhead with
> other typical or extreme block sizes, e.g. '1', '64', '4K', '64K' or '1M'.

It was the default (512); more test results follow (in MB/s):

              512    1K    4K   16K   32K   64K    1M

w/o series     30    46    89    95    90    85    65
w/ series      22    36    72    79    78    75    61
perf drop     26%   21%   19%   16%   13%   12%    6%

Hmm, results ain't that bad :)

> If you want to drill down into where exactly the overhead is (i.e.
> get_user_pages or kmap_atomic, or something different), using
> 'perf record dd ..', and 'perf report' may be helpful.

Let me dig deeper and find out where the major overhead is, then try
to figure out ways to reduce it.

One reason to disable highmem and test (results mentioned earlier) was
to make kmap_atomic() very lightweight; that did not make much
difference, only around 3%.
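
For reference, with highmem disabled, kmap_atomic() reduces to roughly
the below (per include/linux/highmem.h), so only the preempt/pagefault
disabling and the linear-map address lookup remain:

static inline void *kmap_atomic(struct page *page)
{
	preempt_disable();
	pagefault_disable();
	return page_address(page);
}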

> > +static int copy_chunk_from_user(unsigned long from, int len, void *arg)
> > +{
> > +	unsigned long *to_ptr = arg, to = *to_ptr;
> > +
> > +	memcpy((void *) to, (void *) from, len);
> > +	*to_ptr += len;
> > +	return 0;
> > +}
> > +
> > +static int copy_chunk_to_user(unsigned long to, int len, void *arg)
> > +{
> > +	unsigned long *from_ptr = arg, from = *from_ptr;
> > +
> > +	memcpy((void *) to, (void *) from, len);
> > +	*from_ptr += len;
> > +	return 0;
> > +}
>
> Will gcc optimize away the indirect function call and inline everything?
> If not, that would be a small part of the overhead.

I think not, based on objdump. I will make these, and whatever else
possible, inline and see the difference.
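
One option might be to force the common walker inline, so the callback
is a compile-time constant at each call site and gcc can turn the
indirect call into a direct, inlinable one. A sketch of the idea only;
gup_kmap_copy() is a made-up name and the gup/kmap machinery is elided
down to the chunking loop:

static __always_inline int gup_kmap_copy(unsigned long addr, unsigned long n,
		int (*chunk)(unsigned long, int, void *), void *arg)
{
	while (n) {
		int len = min_t(unsigned long, n,
				PAGE_SIZE - offset_in_page(addr));
		/* with the walker inlined, this becomes a direct call */
		int ret = chunk(addr, len, arg);

		if (ret)
			return ret;
		addr += len;
		n -= len;
	}
	return 0;
}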

> > +	num_pages = DIV_ROUND_UP((unsigned long)from + n, PAGE_SIZE) -
> > +		    (unsigned long)from / PAGE_SIZE;
>
> Make sure this doesn't turn into actual division operations but uses shifts.
> It might even be clearer here to open-code the shift operation so readers
> can see what this is meant to compile into.

Okay
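
Something like the below, presumably, with PAGE_SHIFT making the
intended shifts explicit (a sketch against the patch context above):

	num_pages = (((unsigned long)from + n + PAGE_SIZE - 1) >> PAGE_SHIFT) -
		    ((unsigned long)from >> PAGE_SHIFT);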

>
> > +	pages = kmalloc_array(num_pages, sizeof(*pages), GFP_KERNEL | __GFP_ZERO);
> > +	if (!pages)
> > +		goto end;
>
> Another micro-optimization may be to avoid the kmalloc for the common case,
> e.g. anything with "num_pages <= 64", using an array on the stack.

Okay
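
i.e. roughly the below (a sketch; the 64-page cutoff is just the value
Arnd suggested, stack_pages is an illustrative name, and note that
unlike __GFP_ZERO the stack array is not zeroed):

	struct page *stack_pages[64];
	struct page **pages = stack_pages;

	if (num_pages > ARRAY_SIZE(stack_pages)) {
		pages = kmalloc_array(num_pages, sizeof(*pages),
				      GFP_KERNEL | __GFP_ZERO);
		if (!pages)
			goto end;
	}

with the later kfree() skipped when pages == stack_pages.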

> > +	ret = get_user_pages_fast((unsigned long)from, num_pages, 0, pages);
> > +	if (ret < 0)
> > +		goto free_pages;
> > +
> > +	if (ret != num_pages) {
> > +		num_pages = ret;
> > +		goto put_pages;
> > +	}
>
> I think this is technically incorrect: if get_user_pages_fast() only gets
> some of the pages, you should continue with the short buffer and return
> the number of remaining bytes rather than not copying anything. I think
> you did that correctly for a failed kmap_atomic(), but this has to use
> the same logic.

Yes, I will fix that.
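
Roughly like the below, I think: clamp n to the bytes actually covered
by the pages that were pinned and continue, mirroring the failed
kmap_atomic() path (a sketch; 'copyable' is a new local, and the
bookkeeping that turns the clamped length into the uncopied-bytes
return value is elided):

	ret = get_user_pages_fast((unsigned long)from, num_pages, 0, pages);
	if (ret < 0)
		goto free_pages;

	if (ret != num_pages) {
		unsigned long copyable = ret ?
			ret * PAGE_SIZE - offset_in_page(from) : 0;

		/* continue with the short buffer; the tail stays uncopied */
		num_pages = ret;
		n = min_t(unsigned long, n, copyable);
	}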


Regards
afzal