On 20.02.24 18:24, David Hildenbrand wrote:
On 27.10.23 05:38, Peng Zhang wrote:
In dup_mmap(), using __mt_dup() to duplicate the old maple tree and then
directly replacing the entries of VMAs in the new maple tree can result
in better performance. __mt_dup() uses DFS pre-order to duplicate the
maple tree, so it is efficient.
The average time complexity of __mt_dup() is O(n), where n is the number
of VMAs. The proof of the time complexity is provided in the commit log
that introduces __mt_dup(). After duplicating the maple tree, each element
is traversed and replaced (ignoring the cases of deletion, which are rare).
Since it is only a replacement operation for each element, this process is
also O(n).
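(Condensed to its shape, the new scheme looks roughly like the following; error handling, locking and per-VMA setup are omitted, see the diff below for the real thing:

	/* Clone the entire tree in one pass ... */
	retval = __mt_dup(&oldmm->mm_mt, &mm->mm_mt, GFP_KERNEL);
	if (retval)
		goto out;

	/* ... then walk it, replacing each entry with the child's copy. */
	for_each_vma(vmi, mpnt) {
		tmp = vm_area_dup(mpnt);
		...
		vma_iter_bulk_store(&vmi, tmp);
	}
)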
Analyzing the exact time complexity of the previous algorithm is
challenging because each insertion can involve appending to a node, pushing
data to adjacent nodes, or even splitting nodes. The frequency of each
action is difficult to calculate. The worst-case scenario for a single
insertion is when the tree undergoes splitting at every level. If we
consider each insertion as the worst-case scenario, we can determine that
the upper bound of the time complexity is O(n*log(n)), although this is a
loose upper bound. However, based on the test data, it appears that the
actual time complexity is likely to be O(n).
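(Making the loose bound explicit: if b is the branching factor of the tree, a single insertion can split at most one node per level, i.e. at most h(i) <= log_b(i) nodes at the i-th insertion, so for some constant c

	T(n) \le \sum_{i=1}^{n} c \cdot h(i) \le c \cdot n \cdot \log_b(n) = O(n \log n),

which is the ceiling above; the measurements below behave much closer to O(n).)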
As the entire maple tree is duplicated up front using __mt_dup(), if
dup_mmap() fails partway, the new maple tree still contains entries for
VMAs that were never duplicated. To handle this, we mark the failure point
with XA_ZERO_ENTRY. In exit_mmap(), when this marker is encountered,
releasing stops there: the entries beyond it still point at the parent's
VMAs and must not be freed.
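(The corresponding hunks are not part of this excerpt; roughly, the failure path marks the range of the first VMA that was not replaced:

	/* mpnt: the first VMA that dup_mmap() failed to duplicate */
	mas_set_range(&vmi.mas, mpnt->vm_start, mpnt->vm_end - 1);
	mas_store(&vmi.mas, XA_ZERO_ENTRY);

and exit_mmap()/free_pgtables() bail out once xa_is_zero() is true for an entry, which is what the mm/memory.c and mm/mmap.c hunks in the diffstat implement.)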
There is a "spawn" in byte-unixbench[1], which can be used to test the
performance of fork(). I modified it slightly to make it work with
different number of VMAs.
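(For illustration only, a hypothetical sketch of such a modification; real code would check for MAP_FAILED. Alternating the protection bits keeps adjacent anonymous mappings from being merged into a single VMA:

	#include <sys/mman.h>

	/* Hypothetical helper: create roughly nvmas distinct VMAs. */
	static void create_vmas(int nvmas)
	{
		int i;

		for (i = 0; i < nvmas; i++)
			mmap(NULL, 4096,
			     (i & 1) ? PROT_READ : PROT_READ | PROT_WRITE,
			     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	}

This is not the actual byte-unixbench change, just the idea.)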
Below are the test results. The first row shows the number of VMAs.
The second and third rows show the number of fork() calls per ten seconds,
corresponding to next-20231006 and this patchset, respectively. The
test results were obtained with CPU binding to avoid scheduler load
balancing that could cause unstable results. There are still some
fluctuations in the test results, but at least they are better than the
original performance.
VMAs:              21    121    221    421    821   1621   3221   6421  12821  25621  51221
next-20231006: 112100  76261  54227  34035  20195  11112   6017   3161   1606    802    393
this patchset: 114558  83067  65008  45824  28751  16072   8922   4747   2436   1233    599
improvement:    2.19%  8.92% 19.88% 34.64% 42.37% 44.64% 48.28% 50.17% 51.68% 53.74% 52.42%
[1] https://github.com/kdlucas/byte-unixbench/tree/master
Signed-off-by: Peng Zhang <zhangpeng.00@xxxxxxxxxxxxx>
Suggested-by: Liam R. Howlett <Liam.Howlett@xxxxxxxxxx>
Reviewed-by: Liam R. Howlett <Liam.Howlett@xxxxxxxxxx>
---
include/linux/mm.h | 11 +++++++++++
kernel/fork.c | 40 +++++++++++++++++++++++++++++-----------
mm/internal.h | 11 -----------
mm/memory.c | 7 ++++++-
mm/mmap.c | 9 ++++++---
5 files changed, 52 insertions(+), 26 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 14d5aaff96d0..e9111ec5808c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -996,6 +996,17 @@ static inline int vma_iter_bulk_alloc(struct vma_iterator *vmi,
return mas_expected_entries(&vmi->mas, count);
}
+static inline int vma_iter_clear_gfp(struct vma_iterator *vmi,
+ unsigned long start, unsigned long end, gfp_t gfp)
+{
+ __mas_set_range(&vmi->mas, start, end - 1);
+ mas_store_gfp(&vmi->mas, NULL, gfp);
+ if (unlikely(mas_is_err(&vmi->mas)))
+ return -ENOMEM;
+
+ return 0;
+}
+
/* Free any unused preallocations */
static inline void vma_iter_free(struct vma_iterator *vmi)
{
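(Per the diffstat, this helper moves here from mm/internal.h so that kernel/fork.c can use it: the rare deletion case mentioned above is VM_DONTCOPY, whose entries were copied into the new tree by __mt_dup() and must be dropped again. Roughly, in the copy loop:

	if (mpnt->vm_flags & VM_DONTCOPY) {
		retval = vma_iter_clear_gfp(&vmi, mpnt->vm_start,
					    mpnt->vm_end, GFP_KERNEL);
		if (retval)
			goto loop_out;
		continue;
	}

That hunk is not part of this excerpt.)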
diff --git a/kernel/fork.c b/kernel/fork.c
index 1e6c656e0857..1552ee66517b 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -650,7 +650,6 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
int retval;
unsigned long charge = 0;
LIST_HEAD(uf);
- VMA_ITERATOR(old_vmi, oldmm, 0);
VMA_ITERATOR(vmi, mm, 0);
uprobe_start_dup_mmap();
@@ -678,16 +677,22 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
goto out;
khugepaged_fork(mm, oldmm);
- retval = vma_iter_bulk_alloc(&vmi, oldmm->map_count);
- if (retval)
+ /* Use __mt_dup() to efficiently build an identical maple tree. */
+ retval = __mt_dup(&oldmm->mm_mt, &mm->mm_mt, GFP_KERNEL);
+ if (unlikely(retval))
goto out;
mt_clear_in_rcu(vmi.mas.tree);
- for_each_vma(old_vmi, mpnt) {
+ for_each_vma(vmi, mpnt) {
struct file *file;
vma_start_write(mpnt);
We used to call vma_start_write() on the *old* VMA, to prevent any kind of page fault in
the old MM while we are duplicating PTEs (and COW-sharing pages).
See
commit fb49c455323ff8319a123dd312be9082c49a23a5
Author: Suren Baghdasaryan <surenb@xxxxxxxxxx>
Date: Sat Jul 8 12:12:12 2023 -0700
fork: lock VMAs of the parent process when forking
When forking a child process, the parent write-protects anonymous pages
and COW-shares them with the child being forked using copy_present_pte().
We must not take any concurrent page faults on the source vma's as they
are being processed, as we expect both the vma and the pte's behind it
to be stable. For example, the anon_vma_fork() expects the parent's
vma->anon_vma to not change during the vma copy.
Unless I am missing something, we now call vma_start_write() on the *new* VMA?
If that is the case, this is broken and needs fixing; likely by going over all
VMAs in the old_mm and calling vma_start_write() on each.
But maybe there is some magic going on that I am missing :)
... likely the magic is that the new tree links the same VMAs (the VMAs themselves are not duplicated until vm_area_dup()), so we are indeed locking the VMAs of the old_mm (which are temporarily linked into the new MM).
If that's the case, all good :)
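For reference, the ordering in the loop body that provides this, condensed from the patch (error handling and file/anon details omitted):

	for_each_vma(vmi, mpnt) {
		/*
		 * mpnt is still the parent's VMA: __mt_dup() copied the
		 * tree nodes, not the vm_area_structs they point to.
		 */
		vma_start_write(mpnt);		/* write-lock the old VMA */
		tmp = vm_area_dup(mpnt);	/* only now create the copy */
		...
		vma_iter_bulk_store(&vmi, tmp);	/* swap it into the new tree */
	}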