Re: [PATCH v2] mm: show total hugetlb memory consumption in /proc/meminfo

From: Michal Hocko
Date: Wed Nov 22 2017 - 04:11:07 EST


On Tue 21-11-17 16:27:38, Mike Kravetz wrote:
> On 11/21/2017 11:59 AM, Roman Gushchin wrote:
[...]
> > What we can do is rename "count" to "nr_huge_pages", like:
> >
> > 	for_each_hstate(h) {
> > 		unsigned long nr_huge_pages = h->nr_huge_pages;
> >
> > 		total += (PAGE_SIZE << huge_page_order(h)) * nr_huge_pages;
> >
> > 		if (h == &default_hstate)
> > 			seq_printf(m,
> > 				   "HugePages_Total:   %5lu\n"
> > 				   "HugePages_Free:    %5lu\n"
> > 				   "HugePages_Rsvd:    %5lu\n"
> > 				   "HugePages_Surp:    %5lu\n"
> > 				   "Hugepagesize:   %8lu kB\n",
> > 				   nr_huge_pages,
> > 				   h->free_huge_pages,
> > 				   h->resv_huge_pages,
> > 				   h->surplus_huge_pages,
> > 				   (PAGE_SIZE << huge_page_order(h)) / 1024);
> > 	}
> >
> > 	seq_printf(m, "Hugetlb:        %8lu kB\n", total / 1024);
> >
> > But maybe taking a lock is not a bad idea, because it will also
> > guarantee consistency between the other numbers (like HugePages_Free),
> > which is not true right now.
>
> You are correct that there is no consistency guarantee today for the
> numbers reported for the default huge page size. However, I am not
> really a fan of taking the lock just for that guarantee. IMO, the above
> code is fine.

I agree.
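
For completeness, the locked variant Roman mentions would only need to
wrap the loop (untested sketch; hugetlb_lock is the existing global
spinlock that the allocation/free paths already take, and seq_printf
only fills a preallocated buffer, so holding a spinlock across it is
fine):

	spin_lock(&hugetlb_lock);
	for_each_hstate(h) {
		unsigned long nr_huge_pages = h->nr_huge_pages;

		total += (PAGE_SIZE << huge_page_order(h)) * nr_huge_pages;

		if (h == &default_hstate)
			seq_printf(m, ...);	/* same output as above */
	}
	spin_unlock(&hugetlb_lock);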

> This discussion reminds me that ideally there should be a per-hstate lock.
> My guess is that the global lock is a carry-over from the days when only
> a single huge page size was supported. In practice, I don't think this is
> much of an issue as people typically use only a single huge page size. But,
> if anyone thinks it is or may be an issue, I am happy to make the changes.

Well, it kind of makes sense, but I am not sure it is worth the bother.
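
Should someone decide to try it anyway, the natural shape is probably a
lock embedded in struct hstate, something like (hypothetical sketch;
the lock field does not exist today):

	struct hstate {
		spinlock_t lock;	/* hypothetical: would protect the counters below */
		...
		unsigned long nr_huge_pages;
		unsigned long free_huge_pages;
		unsigned long resv_huge_pages;
		unsigned long surplus_huge_pages;
		...
	};

	/* updates would then serialize per hstate instead of globally */
	spin_lock(&h->lock);
	h->free_huge_pages++;
	spin_unlock(&h->lock);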

--
Michal Hocko
SUSE Labs