Re: [PATCH] [0/6] HUGETLB memory commitment

From: Andrew Morton
Date: Thu Mar 25 2004 - 19:43:56 EST


Keith Owens <kaos@xxxxxxx> wrote:
>
> FWIW, lkcd (crash dump) treats hugetlb pages as normal kernel pages and
> dumps them, which is pointless and wastes a lot of time. To avoid
> dumping these pages in lkcd, I had to add a PG_hugetlb flag. lkcd runs
> at the page level, not at the mm or vma level, so VM_hugetlb was not
> available. In set_hugetlb_mem_size():
>
> 	for (j = 0; j < (HPAGE_SIZE / PAGE_SIZE); j++) {
> 		SetPageReserved(map);
> 		SetPageHugetlb(map);
> 		map++;
> 	}
>
> In dump_base.c, I changed kernel_page(), referenced_page() and
> unreferenced_page() to test for PageHugetlb() before PageReserved().

That makes sense.
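
Roughly, in 2.6 page-flags style, the change Keith describes would look
like this (the bit number and the shape of kernel_page() are illustrative
guesses, not the actual lkcd source):

	#define PG_hugetlb		20	/* illustrative bit number */
	#define PageHugetlb(page)	test_bit(PG_hugetlb, &(page)->flags)
	#define SetPageHugetlb(page)	set_bit(PG_hugetlb, &(page)->flags)

	/* dump_base.c: filter hugetlb pages out before the PageReserved()
	 * test, which would otherwise claim them as dumpable */
	static int kernel_page(struct page *p)
	{
		if (PageHugetlb(p))	/* hugetlb backing page: don't dump */
			return 0;
		return PageReserved(p);
	}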

> Since you are looking at identifying hugetlb pages, could any other
> code benefit from a PG_hugetlb flag?

In the overcommit code we don't actually have the page yet. We're asking
"do we have enough memory available to honour this mmap() invocation when
it later faults in real pages?"
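
Schematically, the mmap()-time question is the one below; the identifier
names are made up for illustration, and the real accounting is what this
patch series adds:

	static unsigned long free_hugepages;	/* pages left in the pool */
	static unsigned long resv_hugepages;	/* promised to mappings */

	/* Called at mmap() time, before any struct page exists for the
	 * mapping: refuse now rather than SIGBUS at fault time. */
	static int hugetlb_enough_pages(unsigned long npages)
	{
		if (npages > free_hugepages - resv_hugepages)
			return -ENOMEM;
		resv_hugepages += npages;
		return 0;
	}

Which is also why a page flag doesn't help here: there is no struct page
to set it on until the pages are actually faulted in.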
