Re: [RFC V2 SLEB 00/14] The Enhanced (hopefully) Slab Allocator

From: Pekka Enberg
Date: Tue May 25 2010 - 05:19:17 EST


Hi Nick,

On Tue, May 25, 2010 at 11:16 AM, Nick Piggin <npiggin@xxxxxxx> wrote:
> I don't think SLUB ever proved itself very well. The selling points
> were some untestable handwaving about how queueing is bad and jitter
> is bad, ignoring the fact that queues could be shortened and periodic
> reaping disabled at runtime with a SLAB-style allocator. It also
> has relied heavily on higher-order allocations, which put great strain
> on hugepage allocations and page reclaim (witness the big slowdown
> in low memory conditions when tmpfs was using higher-order allocations
> via SLUB).
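
For reference, the queueing being described boils down to something
like this (a rough userspace sketch, not mm/slab.c itself;
free_to_slab() is a stand-in for the real slab free path):

#include <stdlib.h>

/* A SLAB-style per-CPU queue: a bounded array of cached free objects. */
struct array_cache {
	unsigned int avail;	/* objects currently queued */
	unsigned int limit;	/* tunable maximum queue length */
	void *entry[64];	/* cached free objects */
};

/* Stand-in for handing an object back to its slab page. */
static void free_to_slab(void *obj)
{
	free(obj);
}

/*
 * Runtime tuning: shorten the queue and flush the excess. The
 * periodic reaper is just another caller of this; skip the call
 * and reaping is off.
 */
static void set_queue_limit(struct array_cache *ac, unsigned int new_limit)
{
	while (ac->avail > new_limit)
		free_to_slab(ac->entry[--ac->avail]);
	ac->limit = new_limit;
}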

The main selling point for SLUB was NUMA. Has the situation changed?
Reliance on higher-order allocations isn't that relevant if we're
discussing ways to change the allocation strategy anyway.
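
To make the higher-order point concrete: a slab's page order is chosen
roughly like this (a simplified sketch assuming 4 KiB pages, not SLUB's
actual calculate_order()):

static unsigned int slab_order(size_t object_size, unsigned int min_objects)
{
	unsigned int order = 0;

	/*
	 * Grow the slab until it holds at least min_objects objects.
	 * Each step doubles the contiguous allocation that the page
	 * allocator, and reclaim under pressure, has to satisfy.
	 */
	while (((size_t)4096 << order) / object_size < min_objects)
		order++;
	return order;
}

With these assumed numbers, a 3000-byte object and min_objects of 8
already means order 3, i.e. a 32 KiB contiguous allocation.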

On Tue, May 25, 2010 at 11:16 AM, Nick Piggin <npiggin@xxxxxxx> wrote:
> SLUB has not been able to displace SLAB for a long time due to
> performance and higher-order allocation problems.
>
> I think "clean code" is very important, but by far the hardest thing to
> get right by far is the actual allocation and freeing strategies. So
> it's crazy to base such a choice on code cleanliness. If that's the
> deciding factor, then I can provide a patch to modernise SLAB and then
> we can remove SLUB and start incremental improvements from there.

I'm more than happy to take in patches to clean up SLAB, but I think
you're underestimating the required effort. What SLUB has going for
it:

- No NUMA alien caches (see the sketch after this list)
- No special lockdep handling required
- Debugging support is better
- Cpuset interactions are simpler
- Memory hotplug is more mature
- Many more contributors to SLUB than to SLAB
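
To pick one item from that list: NUMA SLAB keeps, for each node, a
queue of frees destined for every other node, so the alien cache
bookkeeping grows quadratically with the node count. Roughly this
shape (illustrative names and sizes, not the real mm/slab.c
structures):

#define MAX_NUMNODES 4	/* assumed small machine */

struct alien_queue {
	unsigned int avail;
	void *entry[16];	/* remote frees batched until flushed home */
};

struct cache_node {
	/*
	 * One queue per *other* node, for every cache and every node:
	 * the footprint and locking scale as nodes * nodes. SLUB
	 * avoids all of this by freeing remote objects straight back
	 * to their slab page.
	 */
	struct alien_queue *alien[MAX_NUMNODES];
};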

I was one of the people cleaning up SLAB when SLUB was merged, and
based on that experience I'm strongly in favor of SLUB as a base.

Pekka