Re: [PATCH] mm,hugetlb: take hugetlb_lock before decrementing h->resv_huge_pages

From: Mike Kravetz
Date: Mon Oct 17 2022 - 22:09:38 EST


On 10/17/22 20:25, Rik van Riel wrote:
> The h->*_huge_pages counters are protected by the hugetlb_lock, but
> alloc_huge_page has a corner case where it decrements h->resv_huge_pages
> outside of the lock.
>
> This could lead to a corrupted value of h->resv_huge_pages, which we
> have observed on our systems.
>
> Take the hugetlb_lock before decrementing h->resv_huge_pages to avoid
> a potential race.
>
> Fixes: a88c76954804 ("mm: hugetlb: fix hugepage memory leak caused by wrong reserve count")
> Cc: stable@xxxxxxxxxx
> Cc: Naoya Horiguchi <n-horiguchi@xxxxxxxxxxxxx>
> Cc: Glen McCready <gkmccready@xxxxxxxx>
> Cc: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
> Cc: Muchun Song <songmuchun@xxxxxxxxxxxxx>
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Signed-off-by: Rik van Riel <riel@xxxxxxxxxxx>
> ---
> mm/hugetlb.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)

Thanks Rik! That case did slip through the cracks.

Reviewed-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
--
Mike Kravetz

>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index b586cdd75930..dede0337c07c 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -2924,11 +2924,11 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
> 	page = alloc_buddy_huge_page_with_mpol(h, vma, addr);
> 	if (!page)
> 		goto out_uncharge_cgroup;
> +	spin_lock_irq(&hugetlb_lock);
> 	if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
> 		SetHPageRestoreReserve(page);
> 		h->resv_huge_pages--;
> 	}
> -	spin_lock_irq(&hugetlb_lock);
> 	list_add(&page->lru, &h->hugepage_activelist);
> 	set_page_refcounted(page);
> 	/* Fall through */
> --
> 2.37.2
>
>