Re: [PATCH v1 2/2] hugetlb: process multiple lists in gather_bootmem_prealloc_parallel

From: Muchun Song
Date: Tue Feb 13 2024 - 09:56:16 EST

> On Feb 13, 2024, at 19:13, Gang Li <gang.li@xxxxxxxxx> wrote:
>
> gather_bootmem_prealloc_node currently only processes one list in the
> huge_boot_pages array. So gather_bootmem_prealloc expects
> padata_do_multithreaded to run num_node_state(N_MEMORY) instances of
> gather_bootmem_prealloc_node to process all lists in huge_boot_pages.
>
> This works well with the current padata_do_multithreaded implementation,
> which guarantees that size/min_chunk <= thread num <= max_threads.
>
> ```
> /* Ensure at least one thread when size < min_chunk. */
> nworks = max(job->size / max(job->min_chunk, job->align), 1ul);
> nworks = min(nworks, job->max_threads);
>
> ps.nworks = padata_work_alloc_mt(nworks, &ps, &works);
> ```
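
For concreteness, the current caller presumably sizes the job so that
padata hands exactly one node to each invocation of
gather_bootmem_prealloc_node. A rough sketch under that assumption (field
values inferred from the description above, not quoted from the tree):

```
static void __init gather_bootmem_prealloc(void)
{
	struct padata_mt_job job = {
		.thread_fn	= gather_bootmem_prealloc_node,
		.fn_arg		= NULL,
		.start		= 0,
		.size		= num_node_state(N_MEMORY),	/* one unit per node */
		.align		= 1,
		.min_chunk	= 1,
		.max_threads	= num_node_state(N_MEMORY),
	};

	padata_do_multithreaded(&job);
}
```

With N = num_node_state(N_MEMORY), the formula above gives
nworks = min(max(N / 1, 1), N) = N, i.e. exactly one worker per memory
node, which is what gather_bootmem_prealloc_node relies on.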
>
> However, the comment of the padata_do_multithreaded API only promises a
> maximum value for the number of threads and does not specify a minimum.
> With fewer threads, a single call to gather_bootmem_prealloc_node may be
> handed a range spanning multiple nodes, of which only one will be
> processed.
>
> To avoid this potential problem, introduce gather_bootmem_prealloc_parallel
> to handle the case where the number of threads falls short of max_threads.
>
> Fixes: 0306f03dcbd7 ("hugetlb: parallelize 1G hugetlb initialization")
> Signed-off-by: Gang Li <ligang.bdlg@xxxxxxxxxxxxx>
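
A rough sketch of what the proposed wrapper could look like, reusing the
existing per-node helper with its thread_fn-style signature; the actual
patch may differ in detail:

```
/*
 * Sketch only: walk the whole [start, end) range handed to this worker,
 * so correctness no longer depends on padata spawning exactly
 * max_threads workers (one per node).
 */
static void __init gather_bootmem_prealloc_parallel(unsigned long start,
						    unsigned long end,
						    void *arg)
{
	unsigned long nid;

	for (nid = start; nid < end; nid++)
		gather_bootmem_prealloc_node(nid, nid + 1, NULL);
}
```

gather_bootmem_prealloc() would then pass this wrapper as job.thread_fn,
so every node in a worker's range is processed regardless of how many
threads padata actually starts.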

Reviewed-by: Muchun Song <muchun.song@xxxxxxxxx>

Thanks.