Re: [PATCH] [0/6] HUGETLB memory commitment

From: Martin J. Bligh
Date: Thu Mar 25 2004 - 19:47:11 EST


> I think it's simply:
>
> - Make normal overcommit logic skip hugepages completely
>
> - Teach the overcommit_memory=2 logic that hugepages are basically
> "pinned", so subtract them from the arithmetic.
>
> And that's it. The hugepages are semantically quite different from normal
> memory (prefaulted, preallocated, unswappable) and we've deliberately
> avoided pretending otherwise.
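For concreteness, here's a minimal user-space sketch of the two rules
being proposed. All the counters and the vm_enough_memory() name are
illustrative stand-ins for the kernel's accounting, not the actual code:

#include <stdio.h>

/* Hypothetical system-wide counters, in pages. */
static unsigned long totalram_pages  = 262144; /* 1 GB of 4 KB pages          */
static unsigned long totalswap_pages = 131072; /* 512 MB of swap              */
static unsigned long hugepage_pages  = 65536;  /* 256 MB pinned as hugepages  */
static unsigned long committed_pages = 0;      /* normal commitments so far   */

static int overcommit_memory = 2;  /* strict accounting mode          */
static int overcommit_ratio  = 50; /* percent of RAM we may promise   */

/* Rule 1: hugepage mappings skip the overcommit check completely. */
static int vm_enough_memory(unsigned long pages, int is_hugepage)
{
	if (is_hugepage)
		return 1;

	if (overcommit_memory == 2) {
		/* Rule 2: hugepages are pinned, so subtract them from
		 * the pool that strict accounting may hand out. */
		unsigned long allowed =
			(totalram_pages - hugepage_pages)
				* overcommit_ratio / 100
			+ totalswap_pages;

		if (committed_pages + pages > allowed)
			return 0;
	}

	committed_pages += pages;
	return 1;
}

int main(void)
{
	printf("grant 1000 normal pages: %d\n", vm_enough_memory(1000, 0));
	printf("grant a hugepage mapping: %d\n", vm_enough_memory(512, 1));
	return 0;
}

The point is just the arithmetic: hugepage mappings bypass the check,
and the pinned hugepage memory is subtracted from what strict
accounting is allowed to promise.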

It would be nice (and would fix some of the problems posted) if hugepages
didn't have to be prefaulted ... if they had their own overcommit pool
(used whether or not normal overcommit is on), prefaulting would be
unnecessary. A sketch of such a pool follows the list below.

Specifically:

1) SGI found that requesting oodles of large pages took forever.
2) The NUMA allocation API wants to be able to specify placement
policies, which means the pages can't be prefaulted.
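Roughly what I have in mind, as a user-space sketch (hugetlb_reserve()
and the counters are hypothetical names, not an existing interface):
reservation happens at mmap() time and is cheap, while allocation and
NUMA placement happen at fault time.

#include <stdio.h>

static unsigned long huge_free     = 512; /* hugepages in the pool         */
static unsigned long huge_reserved = 0;   /* reserved but not yet faulted  */

/* Called at mmap() time: bumps a counter, allocates nothing,
 * prefaults nothing. */
static int hugetlb_reserve(unsigned long npages)
{
	if (huge_reserved + npages > huge_free)
		return -1; /* would overcommit the hugepage pool */
	huge_reserved += npages;
	return 0;
}

/* Called at fault time: the reservation guarantees a page exists,
 * and placement (e.g. per a NUMA policy) is decided here. */
static void hugetlb_fault_one(void)
{
	huge_reserved--;
	huge_free--;
}

int main(void)
{
	printf("reserve 256: %d\n", hugetlb_reserve(256)); /* succeeds */
	printf("reserve 512: %d\n", hugetlb_reserve(512)); /* fails    */
	hugetlb_fault_one();
	printf("free=%lu reserved=%lu\n", huge_free, huge_reserved);
	return 0;
}

That way a request for oodles of pages only bumps a counter up front,
and each page can be placed according to policy when it's first
touched.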

I'd agree that stopping hugepages from using the main overcommit pool
is the first priority, but it'd be nice to go one stage further.

M.
