[PATCH 2/3] mm/sparse: Optimize sparse_add_one_section()

From: Baoquan He
Date: Wed Mar 20 2019 - 03:36:00 EST


Reorder the allocation of usemap and memmap: the usemap allocation is
much smaller and simpler, so do it first. Otherwise all the hard work
of setting up the memmap is done, only to be rolled back because the
usemap allocation failed.
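
For illustration only, here is a minimal userspace C sketch of the same
ordering idea (the names small_buf/large_buf and setup_resources() are
hypothetical, not taken from mm/sparse.c): acquire the cheap resource
first so that an early failure never forces a rollback of the expensive
setup.

	#include <stdlib.h>

	/* Hypothetical example of the allocation-ordering pattern:
	 * take the cheap, simple allocation first, so its failure
	 * needs no cleanup, and only the expensive allocation's
	 * failure path has anything to undo.
	 */
	static int setup_resources(void **small_buf, void **large_buf)
	{
		/* Cheap allocation first: nothing to roll back on failure. */
		*small_buf = malloc(64);
		if (!*small_buf)
			return -1;

		/* Expensive allocation second: on failure only the small
		 * buffer has to be freed.
		 */
		*large_buf = malloc(1 << 20);
		if (!*large_buf) {
			free(*small_buf);
			*small_buf = NULL;
			return -1;
		}

		return 0;
	}

The patch below applies the same ordering to __kmalloc_section_usemap()
and kmalloc_section_memmap().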

Signed-off-by: Baoquan He <bhe@xxxxxxxxxx>
---
mm/sparse.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/mm/sparse.c b/mm/sparse.c
index 0a0f82c5d969..054b99f74181 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -697,16 +697,17 @@ int __meminit sparse_add_one_section(int nid, unsigned long start_pfn,
ret = sparse_index_init(section_nr, nid);
if (ret < 0 && ret != -EEXIST)
return ret;
- ret = 0;
- memmap = kmalloc_section_memmap(section_nr, nid, altmap);
- if (!memmap)
- return -ENOMEM;
+
usemap = __kmalloc_section_usemap();
- if (!usemap) {
- __kfree_section_memmap(memmap, altmap);
+ if (!usemap)
+ return -ENOMEM;
+ memmap = kmalloc_section_memmap(section_nr, nid, altmap);
+ if (!memmap) {
+ kfree(usemap);
return -ENOMEM;
}

+ ret = 0;
ms = __pfn_to_section(start_pfn);
if (ms->section_mem_map & SECTION_MARKED_PRESENT) {
ret = -EEXIST;
--
2.17.2