Re: [PATCH 3/3] mm: slub: Default slub_max_order to 0

From: Andrea Arcangeli
Date: Thu May 12 2011 - 14:00:44 EST


On Thu, May 12, 2011 at 12:38:34PM -0500, Christoph Lameter wrote:
> I don't think that will change much since huge pages are at MAX_ORDER size.
> Either you can get them or not. The challenge with the small order
> allocations is that they require contiguous memory. Compaction is likely
> not as effective as the prior mechanism that did opportunistic reclaim of
> neighboring pages.

THP requires contiguous pages too, so the issue is similar, and worse
with THP; but THP enables compaction by default, so this likely only
happens with compaction off. We really have to differentiate between
compaction on and off, as it makes a world of difference (a
THP-enabled kernel with compaction off also runs into swap storms and
temporary hangs all the time; it's probably the same issue as SLUB=y
COMPACTION=n). At least THP didn't activate kswapd; kswapd running
lumpy too makes things worse, as it'll probably keep running in the
background after the direct reclaim fails.

The original reports talk about kernels with SLUB=y and
COMPACTION=n. Not sure if anybody is having trouble with SLUB=y
COMPACTION=y...

Compaction is also more effective than the prior mechanism (the prior
mechanism being lumpy reclaim), and it doesn't cause lumpy's VM
disruptions, which ignore all reference information and take down
anything found in the way. A simplified sketch of that lumpy behavior
is below.
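
To make the disruption concrete, here is a hedged, much simplified
sketch of the lumpy isolation step, loosely modeled on
isolate_lru_pages() in the mm/vmscan.c of this era
(lumpy_take_neighbors() is a hypothetical helper name, not the real
function): once a victim page is picked off the LRU tail, its physical
neighbors in the order-aligned block are isolated too, with no
page_referenced() check at all.

static unsigned long lumpy_take_neighbors(struct page *page, int order,
                                          struct list_head *dst)
{
        unsigned long pfn = page_to_pfn(page);
        unsigned long start = pfn & ~((1UL << order) - 1);
        unsigned long end = start + (1UL << order);
        unsigned long cursor, taken = 0;

        for (cursor = start; cursor < end; cursor++) {
                struct page *cursor_page;

                if (!pfn_valid(cursor))
                        continue;
                cursor_page = pfn_to_page(cursor);

                /*
                 * No reference-bit check here: anything in the block
                 * that can be isolated gets taken down, hot or cold.
                 */
                if (PageLRU(cursor_page) &&
                    __isolate_lru_page(cursor_page, ISOLATE_BOTH, 0) == 0) {
                        list_move(&cursor_page->lru, dst);
                        taken++;
                }
        }
        return taken;
}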

I think when COMPACTION=n, lumpy either should go away, or only be
activated by __GFP_REPEAT so that only hugetlbfs makes use of it (a
sketch of that gating follows). Halting the system for a while when
increasing nr_hugepages is acceptable, but when all allocations are
doing that, the system becomes unusable, kind of livelocked.
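
A hedged sketch of the gating, loosely based on set_reclaim_mode() in
the mm/vmscan.c of this era (simplified, not an actual patch): when
compaction is not built in, only callers that passed __GFP_REPEAT are
allowed to fall into lumpy reclaim; everybody else does plain order-0
reclaim.

static void set_reclaim_mode(int priority, struct scan_control *sc,
                             bool sync)
{
        reclaim_mode_t syncmode = sync ? RECLAIM_MODE_SYNC : RECLAIM_MODE_ASYNC;

        if (COMPACTION_BUILD) {
                /* Compaction available: no need for lumpy at all. */
                sc->reclaim_mode = RECLAIM_MODE_COMPACTION | syncmode;
        } else if (sc->order && (sc->gfp_mask & __GFP_REPEAT)) {
                /* Only insistent callers (hugetlbfs) pay the lumpy price. */
                sc->reclaim_mode = RECLAIM_MODE_LUMPYRECLAIM | syncmode;
        } else {
                /* Everybody else reclaims order-0 pages only. */
                sc->reclaim_mode = RECLAIM_MODE_SINGLE | RECLAIM_MODE_ASYNC;
        }
}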

BTW, it comes to mind that in patch 2, SLUB should clear __GFP_REPEAT
too (not only __GFP_NOFAIL). Clearing __GFP_WAIT may or may not be
worth it with COMPACTION=y; it's definitely a good idea to clear
__GFP_WAIT unless lumpy is restricted to __GFP_REPEAT|__GFP_NOFAIL.
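
A minimal sketch of the flag masking I have in mind, assuming it lives
in a small helper applied to SLUB's opportunistic higher-order attempt
(slub_alloc_gfp() is a hypothetical name; the real change would be in
allocate_slab() in mm/slub.c):

static inline gfp_t slub_alloc_gfp(gfp_t flags)
{
        /* Never warn or retry on the speculative higher-order try. */
        flags |= __GFP_NOWARN | __GFP_NORETRY;
        /* Strip the flags that force the allocator to keep insisting. */
        flags &= ~(__GFP_NOFAIL | __GFP_REPEAT);
        /*
         * Debatable with COMPACTION=y, but worth it as long as lumpy
         * can trigger on any __GFP_WAIT allocation:
         * flags &= ~__GFP_WAIT;
         */
        return flags;
}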