Re: [PATCH 1/1] mm/mmu_gather: limit tlb batch count and add schedule point in tlb_batch_pages_flush

From: wangjianxing
Date: Wed Mar 16 2022 - 22:22:16 EST


On 03/16/2022 04:57 PM, Peter Zijlstra wrote:
> This seems like a really complicated way of writing something like the
> below...
> 
> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> index afb7185ffdc4..b382e86c1b47 100644
> --- a/mm/mmu_gather.c
> +++ b/mm/mmu_gather.c
> @@ -47,8 +47,17 @@ static void tlb_batch_pages_flush(struct mmu_gather *tlb)
>  	struct mmu_gather_batch *batch;
>  
>  	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
> -		free_pages_and_swap_cache(batch->pages, batch->nr);
> -		batch->nr = 0;
> +		struct page **pages = batch->pages;
> +
> +		do {
> +			unsigned int nr = min(512U, batch->nr);
> +
> +			free_pages_and_swap_cache(pages, nr);
> +			pages += nr;
> +			batch->nr -= nr;
> +
> +			cond_resched();
> +		} while (batch->nr);
>  	}
>  	tlb->active = &tlb->local;
>  }
Yeah, it looks nicer.

I will resubmit the patch.
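
For readers outside the kernel tree, here is a rough userspace sketch of
the chunk-and-reschedule pattern used in the diff above. It is illustrative
only: free_in_chunks() and the FREE_CHUNK name are stand-ins rather than
anything in mm/mmu_gather.c, and sched_yield() only approximates what
cond_resched() does in the kernel.

#include <sched.h>
#include <stdlib.h>

/* Upper bound on how many objects are released per pass. */
#define FREE_CHUNK	512

static void free_in_chunks(void **objs, unsigned int nr)
{
	while (nr) {
		unsigned int n = nr < FREE_CHUNK ? nr : FREE_CHUNK;
		unsigned int i;

		/* Release one bounded chunk of objects. */
		for (i = 0; i < n; i++)
			free(objs[i]);

		objs += n;
		nr -= n;

		/* Let other tasks run between chunks, as cond_resched() would. */
		sched_yield();
	}
}

With a very large batch this keeps each uninterrupted stretch of freeing
bounded, which is the same latency argument the kernel diff is making.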