Re: [PATCH v3] mm/hugetlb: fix memory offline with hugepage size > memory block size

From: Gerald Schaefer
Date: Fri Sep 23 2016 - 06:36:42 EST


On Thu, 22 Sep 2016 11:12:06 -0700
Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx> wrote:

> On 09/22/2016 09:29 AM, Gerald Schaefer wrote:
> >  static void dissolve_free_huge_page(struct page *page)
> >  {
> > +        struct page *head = compound_head(page);
> > +        struct hstate *h = page_hstate(head);
> > +        int nid = page_to_nid(head);
> > +
> >          spin_lock(&hugetlb_lock);
> > -        if (PageHuge(page) && !page_count(page)) {
> > -                struct hstate *h = page_hstate(page);
> > -                int nid = page_to_nid(page);
> > -                list_del(&page->lru);
> > -                h->free_huge_pages--;
> > -                h->free_huge_pages_node[nid]--;
> > -                h->max_huge_pages--;
> > -                update_and_free_page(h, page);
> > -        }
> > +        list_del(&head->lru);
> > +        h->free_huge_pages--;
> > +        h->free_huge_pages_node[nid]--;
> > +        h->max_huge_pages--;
> > +        update_and_free_page(h, head);
> >          spin_unlock(&hugetlb_lock);
> >  }
>
> Do you need to revalidate anything once you acquire the lock? Can this,
> for instance, race with another thread doing vm.nr_hugepages=0? Or a
> thread faulting in and allocating the large page that's being dissolved?
>

Yes, good point. I was relying on the range being isolated, but that only
seems to be checked in dequeue_huge_page_node(), as introduced with the
original commit. So this would only protect against someone allocating the
hugepage at this point. That much is expected, since we are already beyond
the "point of no return" in offline_pages().
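
For reference, the relevant part of dequeue_huge_page_node() looks roughly
like this (sketch from memory, details may differ slightly):

        list_for_each_entry(page, &h->hugepage_freelists[nid], lru)
                if (!is_migrate_isolate_page(page))
                        break;
        /* only isolated free hugepages left -> allocation fails */
        if (&h->hugepage_freelists[nid] == &page->lru)
                return NULL;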

vm.nr_hugepages=0 seems to be an issue though, as set_max_huge_pages()
does not care about isolation, so I guess we could race here and
double-free the hugepage. Revalidating at least PageHuge() after taking
the lock should protect against that. I am not sure about page_count(),
but I will just check both, which gives the same behaviour as before.
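
So for v4, something like this, i.e. redoing the checks under
hugetlb_lock (just a sketch, not yet tested):

static void dissolve_free_huge_page(struct page *page)
{
        spin_lock(&hugetlb_lock);
        /* recheck under the lock, we may race with set_max_huge_pages() */
        if (PageHuge(page) && !page_count(page)) {
                struct page *head = compound_head(page);
                struct hstate *h = page_hstate(head);
                int nid = page_to_nid(head);

                list_del(&head->lru);
                h->free_huge_pages--;
                h->free_huge_pages_node[nid]--;
                h->max_huge_pages--;
                update_and_free_page(h, head);
        }
        spin_unlock(&hugetlb_lock);
}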

Will send a v4, after thinking a bit more about the page reservation
point brought up by Mike.