Re: [PATCH v2 1/2] mm: cma: allocate cma areas bottom-up

From: Roman Gushchin
Date: Mon Dec 21 2020 - 12:06:52 EST


On Sun, Dec 20, 2020 at 08:48:48AM +0200, Mike Rapoport wrote:
> On Thu, Dec 17, 2020 at 12:12:13PM -0800, Roman Gushchin wrote:
> > Currently cma areas without a fixed base are allocated close to the
> > end of the node. This placement is sub-optimal because of compaction:
> > it brings pages into the cma area. In particular, it can bring in hot
> > executable pages, even if there is plenty of free memory on the
> > machine. This results in cma allocation failures.
> >
> > Instead let's place cma areas close to the beginning of a node.
> > In this case the compaction will help to free cma areas, resulting
> > in better cma allocation success rates.
> >
> > If there is enough memory, let's try to allocate bottom-up, starting
> > at 4GB to exclude any possible interference with DMA32. On smaller
> > machines, or in the case of a failure, stick with the old behavior.
> >
> > 16GB vm, 2GB cma area:
> > With this patch:
> > [ 0.000000] Command line: root=/dev/vda3 rootflags=subvol=/root systemd.unified_cgroup_hierarchy=1 enforcing=0 console=ttyS0,115200 hugetlb_cma=2G
> > [ 0.002928] hugetlb_cma: reserve 2048 MiB, up to 2048 MiB per node
> > [ 0.002930] cma: Reserved 2048 MiB at 0x0000000100000000
> > [ 0.002931] hugetlb_cma: reserved 2048 MiB on node 0
> >
> > Without this patch:
> > [ 0.000000] Command line: root=/dev/vda3 rootflags=subvol=/root systemd.unified_cgroup_hierarchy=1 enforcing=0 console=ttyS0,115200 hugetlb_cma=2G
> > [ 0.002930] hugetlb_cma: reserve 2048 MiB, up to 2048 MiB per node
> > [ 0.002933] cma: Reserved 2048 MiB at 0x00000003c0000000
> > [ 0.002934] hugetlb_cma: reserved 2048 MiB on node 0
> >
> > v2:
> > - switched to memblock_set_bottom_up(true), by Mike
> > - start with 4GB, by Mike
> >
> > Signed-off-by: Roman Gushchin <guro@xxxxxx>
>
> With one nit below
>
> Reviewed-by: Mike Rapoport <rppt@xxxxxxxxxxxxx>
>
> > ---
> > mm/cma.c | 16 ++++++++++++++++
> > 1 file changed, 16 insertions(+)
> >
> > diff --git a/mm/cma.c b/mm/cma.c
> > index 7f415d7cda9f..21fd40c092f0 100644
> > --- a/mm/cma.c
> > +++ b/mm/cma.c
> > @@ -337,6 +337,22 @@ int __init cma_declare_contiguous_nid(phys_addr_t base,
> > limit = highmem_start;
> > }
> >
> > + /*
> > + * If there is enough memory, try a bottom-up allocation first.
> > + * It will place the new cma area close to the start of the node
> > + * and guarantee that the compaction is moving pages out of the
> > + * cma area and not into it.
> > + * Avoid using the first 4GB so as not to interfere with
> > + * constrained zones like DMA/DMA32.
> > + */
> > + if (!memblock_bottom_up() &&
> > + memblock_end >= SZ_4G + size) {
>

Hi Mike!

> This seems short enough to fit a single line

Indeed. An updated version below.

Thank you for the review of the series!

I assume it's simpler to route both patches through the mm tree.
What do you think?

Thanks!

--