Re: [PATCH] arm64: mm: fix linear mapping mem access performance degradation

From: Mike Rapoport
Date: Mon Jun 27 2022 - 05:49:40 EST


Please don't post HTML.

On Mon, Jun 27, 2022 at 05:24:10PM +0800, guanghui.fgh wrote:
> Thanks.
>
> On 2022/6/27 14:34, Mike Rapoport wrote:
>
> On Sun, Jun 26, 2022 at 07:10:15PM +0800, Guanghui Feng wrote:
>
> arm64 can build 2M/1G block/section mappings. When a DMA/DMA32 zone is in
> use (crashkernel enabled, rodata=full disabled, kfence disabled), the linear
> map falls back to non-block/section mappings, because the crashkernel
> reservation requires shrinking the region at page granularity. But this
> degrades performance for large contiguous memory accesses in the kernel
> (memcpy/memmove, etc.).
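>
> For reference, the current mainline logic looks roughly like this
> (paraphrased from map_mem() in arch/arm64/mm/mmu.c, not the exact code):
>
>   static void __init map_mem(pgd_t *pgdp)
>   {
>           int flags = NO_EXEC_MAPPINGS;
>           ...
>   #ifdef CONFIG_KEXEC_CORE
>           if (crash_mem_map) {
>                   /*
>                    * With ZONE_DMA/ZONE_DMA32 the crashkernel region
>                    * is not known yet when the linear map is built, so
>                    * the whole linear map drops to page granularity.
>                    */
>                   if (IS_ENABLED(CONFIG_ZONE_DMA) ||
>                       IS_ENABLED(CONFIG_ZONE_DMA32))
>                           flags |= NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS;
>                   else if (crashk_res.end)
>                           memblock_mark_nomap(crashk_res.start,
>                                               resource_size(&crashk_res));
>           }
>   #endif
>           ...
>   }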
>
> There have been many related changes and discussions:
> commit 031495635b46
> commit 1a8e1cef7603
> commit 8424ecdde7df
> commit 0a30c53573b0
> commit 2687275a5843
>
> Please include a one-line summary of each commit. (See section "Describe
> your changes" in Documentation/process/submitting-patches.rst)
>
> OK, I will add one-line summaries in the git commit message.
>
> This patch changes the linear map to use block/section mappings even with
> crashkernel. First, build block/section mappings (normally 2M or 1G) for
> all available memory and reserve the crashkernel memory. Then walk the
> page tables and split the block/section mappings into non-block/section
> mappings (normally 4K) *only* for the crashkernel memory.
>
> This already happens when ZONE_DMA/ZONE_DMA32 are disabled. Please explain
> why it is OK to change the way the memory is mapped with
> ZONE_DMA/ZONE_DMA32 enabled.
>
> In short:
>
> 1. Build block/section mappings (normally 1G/2M) for all available memory,
> without regard to the crashkernel.
> 2. Reserve the crashkernel memory exactly as before.
> 3. Change only the crashkernel memory's mapping to page mappings (normally
> 4K); a rough sketch follows.
> With this method, block/section mappings are used as much as possible.
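>
> A rough sketch of step 3 (unmap_range() here is a hypothetical helper;
> the real patch has to walk the page tables itself and honour
> break-before-make: clear the old block entry and flush the TLB before
> installing the new page-granular entries):
>
>   static void __init remap_crashkernel(void)
>   {
>           phys_addr_t start = crashk_res.start;
>           phys_addr_t size = resource_size(&crashk_res);
>           unsigned long va = (unsigned long)__phys_to_virt(start);
>
>           if (!size)
>                   return;
>
>           /* Break-before-make: tear down the 2M/1G block entries
>            * covering the region (hypothetical helper) and flush the
>            * stale TLB entries. */
>           unmap_range(va, va + size);
>           flush_tlb_kernel_range(va, va + size);
>
>           /* Rebuild only this range with 4K page mappings. */
>           __create_pgd_mapping(swapper_pg_dir, start, va, size,
>                                PAGE_KERNEL, early_pgtable_alloc,
>                                NO_BLOCK_MAPPINGS | NO_CONT_MAPPINGS);
>   }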

This does not answer the question of why changing the way the memory is
mapped when ZONE_DMA/ZONE_DMA32 and crashkernel are in use won't cause a
regression.

--
Sincerely yours,
Mike.