Re: [PATCH RFC v3 0/9] SLUB percpu array caches and maple tree nodes

From: Christoph Lameter (Ampere)
Date: Wed Nov 29 2023 - 15:16:23 EST


On Wed, 29 Nov 2023, Vlastimil Babka wrote:

> At LSF/MM I've mentioned that I see several use cases for introducing
> opt-in percpu arrays for caching alloc/free objects in SLUB. This is my
> first exploration of this idea, specifically for the use case of maple
> tree nodes. The assumptions are:

Hohumm... So we are not really removing SLAB but merging SLAB features into SLUB. In addition to per cpu slabs, we now have per cpu queues.
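
Roughly, what gets bolted on next to the existing per cpu slab is a SLAB
style queue of object pointers. A sketch (names made up for illustration,
not the series' actual structures):

struct percpu_array_cache {			/* illustrative only */
	spinlock_t	lock;			/* normally uncontended */
	unsigned int	count;			/* objects currently cached */
	unsigned int	limit;			/* capacity of objects[] */
	void		*objects[];		/* pointers into any slab, any node */
};

/* hypothetically hanging off struct kmem_cache, next to the existing
 * per cpu slab state: */
struct kmem_cache_sketch {
	struct kmem_cache_cpu __percpu		*cpu_slab;	/* existing per cpu slabs */
	struct percpu_array_cache __percpu	*cpu_array;	/* the new per cpu queue */
	/* ... */
};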

> - percpu arrays will be faster than bulk alloc/free, which needs
> relatively long freelists to work well. Especially in the freeing case
> we need the nodes to come from the same slab (or a small set of those).

Percpu arrays require the code to handle individual objects. With freelists in partial slabs, numerous objects can be handled at once through a single pointer to the list of objects.
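
To make the contrast concrete (sketches only, not the kernel's actual code):
the in-slab freelist threads the next pointer through the free objects
themselves, so a whole chain moves with one pointer store, while an array
cache has to touch each element:

/* in-slab freelist: the next pointer lives inside the free object itself */
static inline void *freelist_pop(void **freelist)
{
	void *object = *freelist;

	if (object)
		*freelist = *(void **)object;	/* follow embedded next pointer */
	return object;
}

/* percpu array: each object is pushed or popped individually */
static inline bool array_push(void **objects, unsigned int *count,
			      unsigned int limit, void *object)
{
	if (*count >= limit)
		return false;
	objects[(*count)++] = object;
	return true;
}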

In order to make the SLUB in-page freelists work better you need larger freelists, and those come with larger slab page orders. E.g. boot with
slub_min_order=5 or so to increase performance.

Also this means increased TLB pressure. The in-page freelists of SLUB cause objects to be served from the same page. The SLAB queueing approach
results in objects being mixed from arbitrary addresses, and thus neighboring objects may require more TLB entries.
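
Back of the envelope, assuming 256 byte objects (maple nodes) and 4KB base
pages: 4096 / 256 = 16 objects per page. 16 allocations served from one
in-page freelist can hit a single TLB entry; 16 objects pulled from a queue
that mixes pages may touch up to 16 entries.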

> - preallocation for the worst case of needed nodes for a tree operation
> that can't reclaim due to locks is wasteful. We could instead expect
> that most of the time percpu arrays would satisfy the constrained
> allocations, and in the rare cases they do not we can dip into
> GFP_ATOMIC reserves temporarily. So instead of preallocation, just
> prefill the arrays.

The partial percpu slabs could already do the same.
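
(The scheme being proposed is roughly the following; the prefill/alloc
helper names are invented here for illustration, they are not the series'
actual API:)

/* before taking the lock: fill this CPU's array up to the worst case
 * estimate; may sleep and reclaim. Returns 0 on success (hypothetical). */
if (percpu_array_prefill(maple_node_cache, worst_case_nodes, GFP_KERNEL))
	return -ENOMEM;

/* later, under the lock: normally satisfied from the prefilled array ... */
node = percpu_array_alloc(maple_node_cache);
if (!node)
	/* ... and only a rare miss dips into the atomic reserves */
	node = kmem_cache_alloc(maple_node_cache, GFP_ATOMIC);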

> - NUMA locality of the nodes is not a concern as the nodes of a
> process's VMA tree end up all over the place anyway.

NUMA locality is already controlled by the user through the node specification for percpu slabs. All objects coming from the same in-page freelist of SLUB have the same NUMA locality, which simplifies things.

If you were to consider NUMA locality for the percpu array then you'd be
back to my beloved alien caches. We were not able to avoid those when we tuned SLAB for maximum performance.
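
(For reference, node selection today means a caller can already do something
like the following; cache name and GFP flags are illustrative:)

#include <linux/slab.h>
#include <linux/topology.h>

/* node-aware allocation as it exists now: the caller picks the node and the
 * object comes from an in-page freelist resident on that node */
static void *alloc_node_local(struct kmem_cache *cachep)
{
	return kmem_cache_alloc_node(cachep, GFP_KERNEL, numa_node_id());
}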

> Patch 5 adds support for the per-cpu array caches. The locking is stolen
> from Mel's recent pcplists implementation in the page allocator, so it can
> avoid disabling IRQs and just disable preemption, but the trylocks can fail
> in rare situations - in most cases the locks are uncontended so the locking
> should be cheap.

Ok, the locking is new, but the design follows basic SLAB queue handling.
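
For illustration, the described trylock scheme looks roughly like this (not
the patch's actual code; the cpu_array field and percpu_array_cache layout
are the sketch from above):

static void *array_cache_alloc_sketch(struct kmem_cache *s, gfp_t gfp)
{
	struct percpu_array_cache *pca;
	void *object = NULL;

	/* pin this CPU; preemption off, IRQs stay enabled */
	pca = get_cpu_ptr(s->cpu_array);	/* hypothetical __percpu field */
	if (spin_trylock(&pca->lock)) {		/* normally uncontended */
		if (pca->count)
			object = pca->objects[--pca->count];
		spin_unlock(&pca->lock);
	}
	put_cpu_ptr(s->cpu_array);

	/* trylock failure or empty array: fall back to the regular slab path */
	if (!object)
		object = kmem_cache_alloc(s, gfp);
	return object;
}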