Re: [PATCH 4/6] mm/page_alloc: Drain the requested list first during bulk free

From: Vlastimil Babka
Date: Thu Feb 17 2022 - 04:42:13 EST


On 2/17/22 01:22, Mel Gorman wrote:
> Prior to the series, pindex 0 (order-0 MIGRATE_UNMOVABLE) was always
> skipped first and the precise reason has been forgotten. A potential reason
> may have been to artificially preserve MIGRATE_UNMOVABLE, but there is no
> reason why that would be optimal as it depends on the workload. The more likely
> reason is that it was less complicated to do a pre-increment instead of
> a post-increment in terms of overall code flow. As free_pcppages_bulk()
> now typically receives the pindex of the PCP list that exceeded high,
> always start draining that list.
>
> Signed-off-by: Mel Gorman <mgorman@xxxxxxxxxxxxxxxxxxx>

Reviewed-by: Vlastimil Babka <vbabka@xxxxxxx>

> ---
> mm/page_alloc.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index dfc347a58ea6..635a4e0f70b4 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1463,6 +1463,10 @@ static void free_pcppages_bulk(struct zone *zone, int count,
> * below while (list_empty(list)) loop.
> */
> count = min(pcp->count, count);
> +
> + /* Ensure requested pindex is drained first. */
> + pindex = pindex - 1;
> +
> while (count > 0) {
> struct list_head *list;
> int nr_pages;