[PATCH vmalloc] reduce purge_lock range and vmap_area_lock hold time

From: zhouxianrong
Date: Sat Oct 15 2016 - 10:24:03 EST


From: z00281421 <z00281421@xxxxxxxxxxxxxxxxxxxx>

I think there is no need to keep the __free_vmap_area() loop inside
purge_lock: __free_vmap_area() need not be atomic with respect to the TLB
flush, as long as it runs after the flush, and the whole loop can likewise
run outside purge_lock. Doing so improves real-time latency, because the
loop count can be large and the loop can take a noticeable amount of time.

Signed-off-by: z00281421 <z00281421@xxxxxxxxxxxxxxxxxxxx>
---
mm/vmalloc.c | 14 ++++++++++++--
1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 91f44e7..9d9154d 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -661,13 +661,23 @@ static void __purge_vmap_area_lazy(unsigned long *start, unsigned long *end,
if (nr || force_flush)
flush_tlb_kernel_range(*start, *end);

+ spin_unlock(&purge_lock);
+
if (nr) {
+ /* The batch count should not be too small; when vmalloc space
+ * is scarce, freeing must keep ahead of allocation.
+ */
+ unsigned char batch = -1;
spin_lock(&vmap_area_lock);
- llist_for_each_entry_safe(va, n_va, valist, purge_list)
+ llist_for_each_entry_safe(va, n_va, valist, purge_list) {
__free_vmap_area(va);
+ if (!batch--) {
+ spin_unlock(&vmap_area_lock);
+ spin_lock(&vmap_area_lock);
+ }
+ }
spin_unlock(&vmap_area_lock);
}
- spin_unlock(&purge_lock);
}

/*
--
1.7.9.5