Re: [PATCH] mm/sparse: Consistently do not zero memmap

From: David Hildenbrand
Date: Wed Oct 30 2019 - 09:32:01 EST


On 30.10.19 14:11, Vincent Whitchurch wrote:
> sparsemem without VMEMMAP has two allocation paths to allocate the
> memory needed for its memmap (done in sparse_mem_map_populate()).
> 
> In one allocation path (sparse_buffer_alloc() succeeds), the memory is
> not zeroed (since it was previously allocated with
> memblock_alloc_try_nid_raw()).
> 
> In the other allocation path (sparse_buffer_alloc() fails and
> sparse_mem_map_populate() falls back to memblock_alloc_try_nid()), the
> memory is zeroed.
> 
> AFAICS this difference does not appear to be on purpose. If the code is
> supposed to work with non-initialized memory (__init_single_page() takes
> care of zeroing the struct pages which are actually used), we should
> consistently not zero the memory, to avoid masking bugs.

I agree

Acked-by: David Hildenbrand <david@xxxxxxxxxx>
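
For reference, both paths live in the !CONFIG_SPARSEMEM_VMEMMAP version of
__populate_section_memmap() in mm/sparse.c, which looks roughly like the
following in the tree this patch is against (the comments are mine):

struct page __init *__populate_section_memmap(unsigned long pfn,
		unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
{
	unsigned long size = section_map_size();
	struct page *map = sparse_buffer_alloc(size);
	phys_addr_t addr = __pa(MAX_DMA_ADDRESS);

	/* Path 1: from the sparse buffer, which was allocated with
	 * memblock_alloc_try_nid_raw(), i.e. not zeroed. */
	if (map)
		return map;

	/* Path 2: fallback; memblock_alloc_try_nid() zeroes the memory. */
	map = memblock_alloc_try_nid(size,
					  PAGE_SIZE, addr,
					  MEMBLOCK_ALLOC_ACCESSIBLE, nid);
	if (!map)
		panic("%s: Failed to allocate %lu bytes align=0x%lx nid=%d from=%pa\n",
		      __func__, size, PAGE_SIZE, nid, &addr);

	return map;
}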
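
And for completeness, the reason uninitialized memmap memory is fine: every
struct page that actually gets used is zeroed by __init_single_page() in
mm/page_alloc.c before its fields are set up -- abridged sketch from memory,
so details may differ:

static void __meminit __init_single_page(struct page *page, unsigned long pfn,
				unsigned long zone, int nid)
{
	mm_zero_struct_page(page);	/* zero the struct page itself */
	set_page_links(page, zone, nid, pfn);
	init_page_count(page);
	page_mapcount_reset(page);
	page_cpupid_reset_last(page);

	INIT_LIST_HEAD(&page->lru);
	/* ... remaining setup omitted ... */
}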


> (I noticed this because on my ARM64 platform, with 1 GiB of memory the
> first [and only] section is allocated from the zeroing path while with
> 2 GiB of memory the first 1 GiB section is allocated from the
> non-zeroing path.)
> 
> Signed-off-by: Vincent Whitchurch <vincent.whitchurch@xxxxxxxx>
> ---
>  mm/sparse.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/sparse.c b/mm/sparse.c
> index f6891c1992b1..01e467adc219 100644
> --- a/mm/sparse.c
> +++ b/mm/sparse.c
> @@ -458,7 +458,7 @@ struct page __init *__populate_section_memmap(unsigned long pfn,
>  	if (map)
>  		return map;
>  
> -	map = memblock_alloc_try_nid(size,
> +	map = memblock_alloc_try_nid_raw(size,
>  					  PAGE_SIZE, addr,
>  					  MEMBLOCK_ALLOC_ACCESSIBLE, nid);
>  	if (!map)



--

Thanks,

David / dhildenb