Re: [PATCH v2 4/6] mm/mm_init.c: remove meaningless calculation of zone->managed_pages in free_area_init_core()

From: Mike Rapoport
Date: Thu Mar 28 2024 - 05:53:56 EST


On Thu, Mar 28, 2024 at 04:32:38PM +0800, Baoquan He wrote:
> On 03/25/24 at 10:56pm, Baoquan He wrote:
> >
> >  		/*
> > -		 * Set an approximate value for lowmem here, it will be adjusted
> > -		 * when the bootmem allocator frees pages into the buddy system.
> > -		 * And all highmem pages will be managed by the buddy system.
> > +		 * Initialize zone->managed_pages as 0, it will be reset
> > +		 * when the memblock allocator frees pages into the buddy system.
> >  		 */
> > -		zone_init_internals(zone, j, nid, freesize);
> > +		zone_init_internals(zone, j, nid, 0);
>
> Here, we should initialize zone->managed_pages as zone->present_pages,
> because page_group_by_mobility_disabled later needs to be set according
> to zone->managed_pages. Otherwise page_group_by_mobility_disabled will
> always be set to 1. I will send out v3.

With zone->managed_pages set to zone->present_pages we won't account for
the reserved memory when initializing page_group_by_mobility_disabled.
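
For reference, the sizing check in build_all_zonelists() (mm/page_alloc.c)
looks roughly like this; since the high watermarks are still zero at this
point in boot, nr_free_zone_pages() effectively sums zone->managed_pages,
which is why initializing ->managed_pages to 0 would always disable
grouping by mobility:

	/* Get the number of free pages beyond high watermark in all zones. */
	vm_total_pages = nr_free_zone_pages(gfp_zone(GFP_HIGHUSER_MOVABLE));

	/*
	 * Disable grouping by mobility if the number of pages in the
	 * system is too low to allow the mechanism to work.
	 */
	if (vm_total_pages < (pageblock_nr_pages * MIGRATE_TYPES))
		page_group_by_mobility_disabled = 1;
	else
		page_group_by_mobility_disabled = 0;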

As watermarks are still not initialized at the time build_all_zonelists()
is called, we may use nr_all_pages - nr_kernel_pages instead of
nr_free_zone_pages(), IMO.
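
Something along these lines, as an untested sketch of that idea:

	/*
	 * Watermarks are not initialized yet when build_all_zonelists()
	 * runs during boot, so nr_free_zone_pages() cannot be relied on
	 * here. The counters accumulated by calc_nr_kernel_pages() are
	 * already available at this point, though.
	 */
	vm_total_pages = nr_all_pages - nr_kernel_pages;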

> From a17b0921b4bd00596330f61ee9ea4b82386a9fed Mon Sep 17 00:00:00 2001
> From: Baoquan He <bhe@xxxxxxxxxx>
> Date: Thu, 28 Mar 2024 16:20:15 +0800
> Subject: [PATCH] mm/mm_init.c: set zone's ->managed_pages as ->present_pages
> for now
>
> page_group_by_mobility_disabled later needs to be set according to the
> zone's managed_pages, so initialize zone->managed_pages as
> zone->present_pages for now.
>
> Signed-off-by: Baoquan He <bhe@xxxxxxxxxx>
> ---
> mm/mm_init.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index cc24e7958c0c..dd875f943cbb 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -1561,7 +1561,7 @@ static void __init free_area_init_core(struct pglist_data *pgdat)
>  		 * Initialize zone->managed_pages as 0, it will be reset
>  		 * when the memblock allocator frees pages into the buddy system.
>  		 */
> -		zone_init_internals(zone, j, nid, 0);
> +		zone_init_internals(zone, j, nid, zone->present_pages);
>  
>  		if (!size)
>  			continue;
> --
> 2.41.0
>
>
> >
> >  		if (!size)
> >  			continue;
> > @@ -1915,6 +1878,7 @@ void __init free_area_init(unsigned long *max_zone_pfn)
> >  		check_for_memory(pgdat);
> >  	}
> >
> > +	calc_nr_kernel_pages();
> >  	memmap_init();
> >
> >  	/* disable hash distribution for systems with a single node */
> > --
> > 2.41.0
> >
>

--
Sincerely yours,
Mike.