RE: [linus:master] [iov_iter] c9eec08bac: vm-scalability.throughput -16.9% regression

From: David Laight
Date: Thu Nov 16 2023 - 05:08:38 EST


From: Linus Torvalds
> Sent: 15 November 2023 20:07
...
> - our current "memcpy_orig" fallback does unrolled copy loops, and
> the rep_movs_alternative fallback obviously doesn't.
>
> It's not clear that the unrolled copy loops matter for the in-kernel
> kinds of copies, but who knows. The memcpy_orig code is definitely
> trying to be smarter in some other ways too. So the fallback should
> try a *bit* harder than I did, and not just with the whole "don't try
> to handle exceptions" issue I mentioned.

I'm pretty sure the unrolled copy (and other unrolled loops)
just wastes I-cache and slows things down when running cold-cache.

With out-of-order execution on most x86 CPUs (except the Atoms) you
don't really have to worry about memory latency.
Get the loop-control instructions to run in parallel with the memory
accesses and you can copy one word per clock on average.
I never managed a single-clock loop, but you can get a two-clock
loop (with two reads and two writes in it).

So unrolling once is typically enough.
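
A minimal sketch of the sort of loop I mean, in C rather than asm
(my illustration, not the kernel's memcpy code; copy_unroll2 is a
made-up name), assuming the length is a multiple of 16 and the
buffers don't overlap:

#include <stddef.h>
#include <stdint.h>

static void copy_unroll2(void *dst, const void *src, size_t len)
{
	uint64_t *d = dst;
	const uint64_t *s = src;
	size_t words = len / 8;

	/* Unrolled once: two reads and two writes per iteration,
	 * with the loop-control instructions overlapping the
	 * memory accesses on an out-of-order core. */
	for (size_t i = 0; i < words; i += 2) {
		uint64_t a = s[i];
		uint64_t b = s[i + 1];
		d[i] = a;
		d[i + 1] = b;
	}
}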

You can also ignore alignment; the extra cost is minimal (on Intel
CPUs at least). I think it requires an extra u-op when the copy
crosses a cache line boundary.

On Haswell (which is now quite old) both 'rep movsb' and
'rep movsq' copy 16 bytes/clock unless the destination
is 32-byte aligned, in which case they copy 32 bytes/clock.
Source alignment makes no difference, and neither does byte
alignment.
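
(For reference, what I measured is just the bare string instruction;
a minimal gcc inline-asm sketch of my own, not the kernel's
rep_movs_alternative, looks like:

#include <stddef.h>

static void copy_rep_movsb(void *dst, const void *src, size_t len)
{
	/* rdi = destination, rsi = source, rcx = byte count;
	 * the instruction advances all three as it copies. */
	asm volatile("rep movsb"
		     : "+D" (dst), "+S" (src), "+c" (len)
		     :
		     : "memory");
}

and 'rep movsq' is the same with the count in 8-byte words.)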

Another -Os stupidity is 'push $x; pop %reg' to load
a signed byte constant.
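
For a 64-bit register the trade-off looks like this (encodings from
the SDM; the compiler output is from memory, so treat it as
illustrative only, and minus_one is just a made-up example):

/* loading the constant -1 into a 64-bit register */
long minus_one(void)
{
	/*
	 * push $-1        # 6a ff                  (2 bytes)
	 * pop  %rax       # 58                     (1 byte)
	 * vs
	 * mov  $-1, %rax  # 48 c7 c0 ff ff ff ff   (7 bytes)
	 *
	 * -Os saves a few code bytes but pays with a store and a
	 * dependent load through the stack.
	 */
	return -1;
}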

David
