Re: [PATCH v4 02/16] mm: Batch-copy PTE ranges during fork()

From: David Hildenbrand
Date: Wed Dec 20 2023 - 05:00:22 EST


On 20.12.23 10:57, Ryan Roberts wrote:
On 20/12/2023 09:51, Ryan Roberts wrote:
On 20/12/2023 09:17, David Hildenbrand wrote:
On 19.12.23 18:42, Ryan Roberts wrote:
On 19/12/2023 17:22, David Hildenbrand wrote:
On 19.12.23 09:30, Ryan Roberts wrote:
On 18/12/2023 17:47, David Hildenbrand wrote:
On 18.12.23 11:50, Ryan Roberts wrote:
Convert copy_pte_range() to copy a batch of ptes in one go. A given
batch is determined by the architecture with the new helper,
pte_batch_remaining(), and maps a physically contiguous block of memory,
all belonging to the same folio. A pte batch is then write-protected in
one go in the parent using the new helper, ptep_set_wrprotects() and is
set in one go in the child using the new helper, set_ptes_full().

The primary motivation for this change is to reduce the number of tlb
maintenance operations that the arm64 backend has to perform during
fork, as it is about to add transparent support for the "contiguous bit"
in its ptes. By write-protecting the parent using the new
ptep_set_wrprotects() (note the 's' at the end) function, the backend
can avoid having to unfold contig ranges of PTEs, which is expensive,
when all ptes in the range are being write-protected. Similarly, by
using set_ptes_full() rather than set_pte_at() to set up ptes in the
child, the backend does not need to fold a contiguous range once all
the ptes are populated - they can be initially populated as a
contiguous range in the first place.

This code is very performance sensitive, and a significant amount of
effort has been put into not regressing performance for the order-0
folio case. By default, pte_batch_remaining() is a compile-time
constant 1, which enables the compiler to simplify the extra loops that
are added for batching and produce code that is equivalent to (and as
performant as) the previous implementation.

This change addresses the core-mm refactoring only and a separate change
will implement pte_batch_remaining(), ptep_set_wrprotects() and
set_ptes_full() in the arm64 backend to realize the performance
improvement as part of the work to enable contpte mappings.

To ensure the arm64 implementation is performant once added, this
change is very careful to only call ptep_get() once per pte batch.
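
For illustration, the resulting inner loop has roughly the following
shape (a simplified sketch only; the helper signatures are approximated
from the descriptions above, and the real loop in mm/memory.c also
handles swap entries, device-private pages, rss accounting, etc):

        unsigned int nr;

        /* addr/end, src_pte/dst_pte, src_mm/dst_mm as in copy_pte_range(). */
        for (; addr < end; addr += nr * PAGE_SIZE, src_pte += nr, dst_pte += nr) {
                pte_t pte = ptep_get(src_pte);  /* one ptep_get() per batch */

                nr = pte_batch_remaining(pte, addr, end);

                /* Write-protect the whole batch in the parent in one go. */
                ptep_set_wrprotects(src_mm, addr, src_pte, nr);

                /* Adjust the value for the child (wrprotect, mkold, ...). */
                pte = pte_mkold(pte_wrprotect(pte));

                /* Populate the whole batch in the child in one go. */
                set_ptes_full(dst_mm, addr, dst_pte, pte, nr);
        }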

The following microbenchmark results demonstrate that there is no
significant performance change after this patch. Fork is called in a
tight loop in a process with 1G of populated memory and the time for the
function to execute is measured. 100 iterations per run, 8 runs
performed on both Apple M2 (VM) and Ampere Altra (bare metal). Tests
were performed for the case where the 1G of memory is comprised of
order-0 folios and the case where it is comprised of pte-mapped order-9
folios. Negative is faster, positive is slower, compared to the baseline
upon which the series is based:

| Apple M2 VM   | order-0 (pte-map) | order-9 (pte-map) |
| fork          |-------------------|-------------------|
| microbench    |    mean |   stdev |    mean |   stdev |
|---------------|---------|---------|---------|---------|
| baseline      |    0.0% |    1.1% |    0.0% |    1.2% |
| after-change  |   -1.0% |    2.0% |   -0.1% |    1.1% |

| Ampere Altra  | order-0 (pte-map) | order-9 (pte-map) |
| fork          |-------------------|-------------------|
| microbench    |    mean |   stdev |    mean |   stdev |
|---------------|---------|---------|---------|---------|
| baseline      |    0.0% |    1.0% |    0.0% |    0.1% |
| after-change  |   -0.1% |    1.2% |   -0.1% |    0.1% |

Tested-by: John Hubbard <jhubbard@xxxxxxxxxx>
Reviewed-by: Alistair Popple <apopple@xxxxxxxxxx>
Signed-off-by: Ryan Roberts <ryan.roberts@xxxxxxx>
---
    include/linux/pgtable.h | 80 +++++++++++++++++++++++++++++++++++
    mm/memory.c             | 92 ++++++++++++++++++++++++++---------------
    2 files changed, 139 insertions(+), 33 deletions(-)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index af7639c3b0a3..db93fb81465a 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -205,6 +205,27 @@ static inline int pmd_young(pmd_t pmd)
    #define arch_flush_lazy_mmu_mode()    do {} while (0)
    #endif
+#ifndef pte_batch_remaining
+/**
+ * pte_batch_remaining - Number of pages from addr to next batch boundary.
+ * @pte: Page table entry for the first page.
+ * @addr: Address of the first page.
+ * @end: Batch ceiling (e.g. end of vma).
+ *
+ * Some architectures (arm64) can efficiently modify a contiguous batch of ptes.
+ * In such cases, this function returns the remaining number of pages to the end
+ * of the current batch, as defined by addr. This can be useful when iterating
+ * over ptes.
+ *
+ * May be overridden by the architecture, else batch size is always 1.
+ */
+static inline unsigned int pte_batch_remaining(pte_t pte, unsigned long addr,
+                        unsigned long end)
+{
+    return 1;
+}
+#endif
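
For a sense of how an architecture might plug into this, a purely
hypothetical override for an arch with an arm64-like contiguous granule
of CONT_PTES pages (this is not the arm64 implementation from the later
patches in the series; pte_cont() and CONT_PTES are assumed here) could
look like:

static inline unsigned int pte_batch_remaining(pte_t pte, unsigned long addr,
                                               unsigned long end)
{
        unsigned long next;

        /* Only contiguously-mapped entries form batches larger than 1. */
        if (!pte_cont(pte))
                return 1;

        /* Pages remaining up to the next contig boundary, clamped to end. */
        next = ALIGN(addr + 1, CONT_PTES * PAGE_SIZE);
        return (min(next, end) - addr) >> PAGE_SHIFT;
}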

It's a shame we now lose the optimization for all other architectures.

Was there no way to have some basic batching mechanism that doesn't
require arch specifics?

I tried a bunch of things but ultimately the way I've done it was the only way
to reduce the order-0 fork regression to 0.

My original v3 posting was costing 5% extra and even my first attempt at an
arch-specific version that didn't resolve to a compile-time constant 1 still
cost an extra 3%.



I'd have thought that something very basic would have worked like:

* Check if PTE is the same when setting the PFN to 0.
* Check that PFN is consecutive
* Check that all PFNs belong to the same folio
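
For illustration, a minimal sketch of that kind of generic check might
look as follows (hypothetical helper name; it folds the "same pte bits
except the pfn" and "consecutive pfn" tests into a single pte_same()
against an expected value, and relies on pte_next_pfn()):

static inline int generic_pte_batch(struct folio *folio, pte_t *ptep,
                                    pte_t pte, int max_nr)
{
        pte_t expected = pte_next_pfn(pte);
        int nr = 1;

        while (nr < max_nr) {
                pte = ptep_get(ptep + nr);

                /* All pte bits except the pfn must match, pfns consecutive. */
                if (!pte_same(pte, expected))
                        break;

                /* The next page must still belong to the same folio. */
                if (pte_pfn(pte) - folio_pfn(folio) >= folio_nr_pages(folio))
                        break;

                expected = pte_next_pfn(expected);
                nr++;
        }

        return nr;
}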

I haven't tried this exact approach, but I'd be surprised if I can get the
regression under 4% with this. Further along the series I spent a lot of time
having to fiddle with the arm64 implementation; every conditional and every
memory read (even when in cache) was a problem. There is just so little in the
inner loop that every instruction matters. (At least on Ampere Altra and Apple
M2).

Of course if you're willing to pay that 4-5% for order-0 then the benefit to
order-9 is around 10% in my measurements. Personally though, I'd prefer to play
safe and ensure the common order-0 case doesn't regress, as you previously
suggested.


I just hacked something up, on top of my beloved rmap cleanup/batching series. I
implemented very generic and simple batching for large folios (all PTE bits
except the PFN have to match).

Some very quick testing (don't trust each last %) on an Intel(R) Xeon(R)
Silver 4210R CPU.

order-0: 0.014210 -> 0.013969

-> Around 1.7 % faster

order-9: 0.014373 -> 0.009149

-> Around 36.3 % faster

Well I guess that shows me :)

I'll do a review and run the tests on my HW to see if it concurs.


I pushed a simple compile fixup (we need pte_next_pfn()).

I've just been trying to compile and noticed this. Will take a look at your update.

Took a look; there will still be arch work needed; arm64 doesn't define
PFN_PTE_SHIFT because it defines set_ptes(). I'm not sure if there are
other arches that also don't define PFN_PTE_SHIFT (or pte_next_pfn(), if
the math is more complex) - it will need an audit.
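
(For reference, a generic pte_next_pfn() fallback only needs the
position of the pfn within the pte, i.e. something along these lines,
which is why arches that define set_ptes() but not PFN_PTE_SHIFT would
need looking at:)

#ifndef pte_next_pfn
static inline pte_t pte_next_pfn(pte_t pte)
{
        /* Advance the mapped pfn by one page; requires PFN_PTE_SHIFT. */
        return __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
}
#endif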


Right, likely many that have their own set_ptes() implementation right now.

--
Cheers,

David / dhildenb