[PATCH 2/6] vmtrunc: restore unmap_vmas zap_bytes

From: Hugh Dickins
Date: Sun Oct 03 2004 - 13:27:50 EST


The low-latency unmap_vmas patch silently moved the zap_bytes test
to after the TLB finish, lockbreak and regather: why? That not only
makes zap_bytes redundant (might as well just use ZAP_BLOCK_SIZE), it
makes the batching at the unmap_vmas level redundant too - the whole
point of that batching is to save TLB flushes when unmapping a series
of small vmas.

Move the zap_bytes test back before the lockbreak, and delete the
curious comment claiming that a small zap block size doesn't matter:
it's true that need_flush prevents a TLB flush when no page has been
unmapped, but unmapping pages in small blocks still involves many more
TLB flushes than unmapping them in large blocks.
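
To see the difference concretely, here is a rough user-space model of
that batching arithmetic - not kernel code, and the vma count and sizes
are invented for illustration. It counts how often tlb_finish_mmu()
would run when zapping a run of small vmas, with the zap_bytes test
placed after the flush (the old ordering) or before it (as restored
below):

#include <stdio.h>

#define PAGE_SIZE	4096UL
#define ZAP_BLOCK_SIZE	(8 * PAGE_SIZE)	/* CONFIG_PREEMPT value */

static unsigned long zap(int nr_vmas, unsigned long vma_bytes,
			 int test_before_flush)
{
	unsigned long zap_bytes = ZAP_BLOCK_SIZE;
	unsigned long flushes = 0;
	int i;

	for (i = 0; i < nr_vmas; i++) {
		unsigned long start = 0, end = vma_bytes;

		while (start != end) {
			unsigned long block = end - start;

			if (block > zap_bytes)
				block = zap_bytes;
			/* unmap_page_range() would go here */
			start += block;
			zap_bytes -= block;
			if (test_before_flush && (long)zap_bytes > 0)
				continue;	/* restored: keep gathering */
			flushes++;		/* tlb_finish_mmu() */
			if ((long)zap_bytes > 0)
				continue;	/* old: flush already paid */
			zap_bytes = ZAP_BLOCK_SIZE;
		}
	}
	return flushes;
}

int main(void)
{
	printf("old ordering:      %lu flushes\n", zap(64, PAGE_SIZE, 0));
	printf("restored ordering: %lu flushes\n", zap(64, PAGE_SIZE, 1));
	return 0;
}

With 64 single-page vmas, the old ordering pays 64 flushes where the
restored ordering gathers a full block across vmas and pays only 8.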

Signed-off-by: Hugh Dickins <hugh@xxxxxxxxxxx>
---

mm/memory.c | 9 +++------
1 files changed, 3 insertions(+), 6 deletions(-)

--- trunc1/mm/memory.c	2004-10-03 01:07:01.396013328 +0100
+++ trunc2/mm/memory.c	2004-10-03 01:07:12.957255752 +0100
@@ -506,10 +506,6 @@ static void unmap_page_range(struct mmu_
 }
 
 #ifdef CONFIG_PREEMPT
-/*
- * It's not an issue to have a small zap block size - TLB flushes
- * only happen once normally, due to the tlb->need_flush optimization.
- */
 # define ZAP_BLOCK_SIZE	(8 * PAGE_SIZE)
 #else
 /* No preempt: go for improved straight-line efficiency */
@@ -588,6 +584,9 @@ int unmap_vmas(struct mmu_gather **tlbp,
 
 		start += block;
 		zap_bytes -= block;
+		if ((long)zap_bytes > 0)
+			continue;
+
 		if (!atomic) {
 			int fullmm = tlb_is_full_mm(*tlbp);
 			tlb_finish_mmu(*tlbp, tlb_start, start);
@@ -595,8 +594,6 @@ int unmap_vmas(struct mmu_gather **tlbp,
 			*tlbp = tlb_gather_mmu(mm, fullmm);
 			tlb_start_valid = 0;
 		}
-		if ((long)zap_bytes > 0)
-			continue;
 		zap_bytes = ZAP_BLOCK_SIZE;
 	}
 }
