Re: [patch -mm] mm, hugetlb: schedule when potentially allocating many hugepages

From: Mike Kravetz
Date: Thu Jun 08 2017 - 00:32:09 EST


On 06/07/2017 09:03 PM, David Rientjes wrote:
> A few hugetlb allocators loop while calling the page allocator and can
> potentially prevent rescheduling if the page allocator slowpath is not
> utilized.
>
> Conditionally schedule when large numbers of hugepages can be allocated.
>
> Signed-off-by: David Rientjes <rientjes@xxxxxxxxxx>
> ---
> Based on -mm only to prevent merge conflicts with
> "mm/hugetlb.c: warn the user when issues arise on boot due to hugepages"
>
> mm/hugetlb.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1754,6 +1754,7 @@ static int gather_surplus_pages(struct hstate *h, int delta)
> break;
> }
> list_add(&page->lru, &surplus_list);
> + cond_resched();
> }
> allocated += i;
>
> @@ -2222,6 +2223,7 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
> } else if (!alloc_fresh_huge_page(h,
> &node_states[N_MEMORY]))
> break;
> + cond_resched();
> }
> if (i < h->max_huge_pages) {
> char buf[32];
> @@ -2364,6 +2366,7 @@ static unsigned long set_max_huge_pages(struct hstate *h, unsigned long count,
> ret = alloc_fresh_gigantic_page(h, nodes_allowed);
> else
> ret = alloc_fresh_huge_page(h, nodes_allowed);
> + cond_resched();

Aren't the following lines already present immediately before the above huge
page allocation in set_max_huge_pages, or am I looking at an incorrect version
of the file?

/* yield cpu to avoid soft lockup */
cond_resched();

--
Mike Kravetz

> spin_lock(&hugetlb_lock);
> if (!ret)
> goto out;
>