Re: [PATCH v3 1/2] mm/page_alloc: use ac->high_zoneidx for classzone_idx

From: Joonsoo Kim
Date: Sun Mar 22 2020 - 23:50:56 EST


Hello, Baoquan.

On Fri, Mar 20, 2020 at 7:30 PM Baoquan He <bhe@xxxxxxxxxx> wrote:
>
>
> On 03/20/20 at 05:32pm, js1304@xxxxxxxxx wrote:
> > From: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
> >
> > Currently, we use the zone index of preferred_zone, which represents
> > the best matching zone for the allocation, as classzone_idx. This causes
> > a problem on NUMA systems when lowmem reserve protection exists for
> > some zones on a node that do not exist on other nodes.
> >
> > On a NUMA system, each node can have different populated zones. For
> > example, node 0 could have the DMA/DMA32/NORMAL/MOVABLE zones and
> > node 1 could have only the NORMAL zone. In this setup, an allocation
> > request initiated on node 0 and one initiated on node 1 would have
> > different classzone_idx, 3 and 2 respectively, since their
> > preferred_zones are different. If the allocation is served locally,
> > there is no problem. However, if it is handled by the remote node due
> > to a memory shortage, the problem appears.
>
> Hi Joonsoo,
>
> Not sure if adding one sentence to the above paragraph would make it
> easier to understand. I assume you are only talking about the case where
> high_zoneidx, as calculated by gfp_zone(gfp_mask), is ZONE_MOVABLE, since
> any other case doesn't have this problem. Please correct me if I am wrong.

You're right. This example is for an allocation request whose
gfp_zone(gfp_mask) is ZONE_MOVABLE.
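
To make the difference concrete, here is a minimal user-space sketch,
not the actual allocator code: struct node, highest_populated and
classzone_from_preferred() are just illustrative names, only the zone
names and the lowmem_reserve idea follow the kernel. It shows why the
two nodes end up with different classzone_idx for the same kind of
request:

/*
 * Sketch only: a classzone_idx derived from the preferred zone differs
 * per requesting node, while ac->high_zoneidx would not.
 */
#include <stdio.h>

enum zone_type { ZONE_DMA, ZONE_DMA32, ZONE_NORMAL, ZONE_MOVABLE, MAX_NR_ZONES };

struct node {
	int highest_populated;	/* highest populated zone index on this node */
};

/*
 * classzone_idx as it is computed today: the index of the preferred
 * (best matching, populated) zone on the node that starts the request.
 */
static int classzone_from_preferred(const struct node *n, int high_zoneidx)
{
	return high_zoneidx < n->highest_populated ?
			high_zoneidx : n->highest_populated;
}

int main(void)
{
	struct node node0 = { .highest_populated = ZONE_MOVABLE };
	struct node node1 = { .highest_populated = ZONE_NORMAL };
	int high_zoneidx = ZONE_MOVABLE;	/* gfp_zone() result, same for both */

	/*
	 * Same kind of request, but a different classzone_idx depending on
	 * the node it was started on, so when node 1's request falls back
	 * to node 0 it is checked against a different lowmem_reserve[]
	 * entry than node 0's own request.
	 */
	printf("node0 request: classzone_idx=%d\n",
	       classzone_from_preferred(&node0, high_zoneidx));	/* 3 */
	printf("node1 request: classzone_idx=%d\n",
	       classzone_from_preferred(&node1, high_zoneidx));	/* 2 */
	return 0;
}

With the patch, classzone_idx comes from ac->high_zoneidx (3 in both
cases here), so the fallback allocation on the remote node is checked
against the same lowmem_reserve[] entry regardless of which node
initiated the request.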

> On a NUMA system, each node can have different populated zones. For
> example, node 0 could have the DMA/DMA32/NORMAL/MOVABLE zones and node 1
> could have only the NORMAL zone. In this setup, if gfp_zone(gfp_mask)
> gives high_zoneidx 3 (namely the MOVABLE zone), an allocation request
> initiated on node 0 and one initiated on node 1 would have different
> classzone_idx, 3 and 2 respectively, since their preferred_zones are
> different. If the allocation is served locally, there is no problem.
> However, if it is handled by the remote node due to a memory shortage,
> the problem appears.

I'm okay with your change, but let me try once more to make it better.
Please check the following rewritten commit message and let me know if
it is better than before.

Thanks.

------------------------>8-------------------------------