[RFC PATCH v2 0/6] slub: Delay freezing of CPU partial slabs

From: chengming . zhou
Date: Sat Oct 21 2023 - 10:43:57 EST


From: Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>

Changes in RFC v2:
- Reuse the PG_workingset bit to keep track of whether a slab is on the
per-node partial list, as suggested by Matthew Wilcox.
- Fix an OOM problem on kernels without CONFIG_SLUB_CPU_PARTIAL, caused
by leaking partial slabs in get_partial_node().
- Add a patch to simplify acquire_slab().
- Reorder patches a little.
- v1: https://lore.kernel.org/all/20231017154439.3036608-1-chengming.zhou@xxxxxxxxx/

1. Problem
==========
Currently we have to freeze a slab when taking it from the node partial
list, and unfreeze it when putting it back on the node partial list,
because we rely on the node list_lock to synchronize changes to the
"frozen" bit.

This implementation has some drawbacks:

- Alloc path: two cmpxchg_double operations.
When the allocator has used up its CPU partial slabs, it has to get
some partial slabs from the node. It freezes each slab (one
cmpxchg_double) with the node list_lock held and puts the frozen
slabs on its CPU partial list. Later, ___slab_alloc() runs another
cmpxchg_double try-loop when one of those slabs is picked to use
(a rough model of this cost is sketched after this list).

- Alloc path: amplified contention on the node list_lock.
Since the "frozen" bit changes have to be synchronized under the node
list_lock, contention on a slab (struct page) is transferred to the
node list_lock. On a machine with many CPUs in one node, list_lock
contention is amplified by every CPU's alloc path.

The current code works around this by avoiding the cmpxchg_double
try-loop: it simply breaks out and returns when the first
cmpxchg_double fails because of contention on the page. But this
workaround has its own problems.

- Free path: redundant unfreeze.
__slab_free() freezes and caches some slabs on its CPU partial list
and flushes them to the node partial list when a limit is exceeded,
which requires unfreezing those slabs again under the node list_lock.
We don't actually need to freeze slabs that sit on a CPU partial
list, so the unfreeze cmpxchg_double operations in the flush path
can be saved.
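
To make the alloc-path cost concrete, below is a rough, self-contained
userspace model of the current behaviour, not kernel code: a single
atomic word stands in for the slab counters (freelist/inuse/frozen)
that the kernel updates with cmpxchg_double, and the function names
(fake_freeze(), fake_take_freelist()) are made up for illustration.
The only point is that the alloc path pays for two atomic update loops
per slab, one of them while holding the node list_lock.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct fake_slab {
	/* bit 0: "frozen", bits 1+: a stand-in for inuse/freelist state */
	_Atomic uint64_t counters;
};

#define FAKE_FROZEN 1ull

/* cmpxchg #1: freeze the slab, done with the node list_lock held. */
static bool fake_freeze(struct fake_slab *s)
{
	uint64_t old = atomic_load(&s->counters);

	do {
		if (old & FAKE_FROZEN)
			return false;	/* someone else froze it */
	} while (!atomic_compare_exchange_weak(&s->counters, &old,
					       old | FAKE_FROZEN));
	return true;
}

/* cmpxchg #2: the later ___slab_alloc()-style try-loop on the same slab. */
static void fake_take_freelist(struct fake_slab *s)
{
	uint64_t old = atomic_load(&s->counters);

	/* bump the inuse part to stand in for grabbing the freelist */
	while (!atomic_compare_exchange_weak(&s->counters, &old, old + 2))
		;
}

int main(void)
{
	struct fake_slab s = { .counters = 0 };

	fake_freeze(&s);	/* in get_partial_node(), under list_lock */
	fake_take_freelist(&s);	/* later, when the slab is picked to use */
	printf("counters = %llu\n",
	       (unsigned long long)atomic_load(&s.counters));
	return 0;
}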

2. Solution
===========
We solve these problems by leaving slabs unfrozen when they are taken
off the node partial list and while they sit on a CPU partial list, so
their "frozen" bit stays 0.

These partial slabs won't be manipulated concurrently by the alloc
path; the only racer is the free path, which may manipulate a slab's
list linkage when the slab becomes empty (!inuse). So we need another
way to synchronize against it: we reuse the PG_workingset flag to keep
track of whether the slab is on the node partial list, and only in that
case may the free path manipulate the slab's list.
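
As a rough illustration of that rule, here is again a self-contained
userspace sketch rather than the actual patch, with made-up names like
fake_slab_free(): the flag standing in for PG_workingset is only tested
under the node list_lock, so the free path leaves the list linkage
alone unless the slab really is on the node partial list.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct fake_slab {
	bool on_node_partial;	/* stands in for the reused PG_workingset flag */
};

struct fake_node {
	pthread_mutex_t list_lock;	/* stands in for kmem_cache_node->list_lock */
};

/* Free path: only manipulate the list when the flag says we may. */
static void fake_slab_free(struct fake_node *n, struct fake_slab *slab)
{
	pthread_mutex_lock(&n->list_lock);
	if (slab->on_node_partial) {
		/* slab is on the node partial list: safe to unlink/move it */
		printf("slab %p: may be taken off the node partial list\n",
		       (void *)slab);
	} else {
		/* slab is on a CPU partial list or is the frozen active slab */
		printf("slab %p: not on the node partial list, leave it\n",
		       (void *)slab);
	}
	pthread_mutex_unlock(&n->list_lock);
}

int main(void)
{
	struct fake_node n = { .list_lock = PTHREAD_MUTEX_INITIALIZER };
	struct fake_slab a = { .on_node_partial = true };
	struct fake_slab b = { .on_node_partial = false };

	fake_slab_free(&n, &a);
	fake_slab_free(&n, &b);
	return 0;
}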

The slab is delay-frozen when it is picked for active use by the CPU;
it becomes full at the same time (its whole freelist is taken), so we
still rely on the "frozen" bit to keep the free path from manipulating
its list. The slab is therefore frozen only when it becomes the active
slab and unfrozen only when it is deactivated.
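
Put together, the resulting combinations could be summarized like the
sketch below (illustrative only; the enum and helper names are made up,
not taken from the series): "frozen" is set only for the CPU's active
slab, and the reused PG_workingset flag only while the slab sits on the
node partial list.

#include <stdbool.h>
#include <stdio.h>

/* Where a slab can live in this scheme (illustrative names only). */
enum fake_slab_state {
	NODE_PARTIAL,	/* on the per-node partial list */
	CPU_PARTIAL,	/* on a CPU's partial list */
	CPU_ACTIVE,	/* the CPU's active slab, freelist taken */
	FULL_DETACHED,	/* full, on no list at all */
};

struct fake_bits {
	bool frozen;		/* "frozen" bit in the slab counters */
	bool node_partial;	/* reused PG_workingset flag */
};

/* Frozen only while active; node-partial flag only while on the node list. */
static struct fake_bits fake_bits_for(enum fake_slab_state st)
{
	switch (st) {
	case NODE_PARTIAL:
		return (struct fake_bits){ .frozen = false, .node_partial = true };
	case CPU_PARTIAL:
	case FULL_DETACHED:
		return (struct fake_bits){ .frozen = false, .node_partial = false };
	case CPU_ACTIVE:
		return (struct fake_bits){ .frozen = true, .node_partial = false };
	}
	return (struct fake_bits){ 0 };
}

int main(void)
{
	static const char *names[] = {
		"node partial", "cpu partial", "cpu active", "full/detached",
	};

	for (int st = NODE_PARTIAL; st <= FULL_DETACHED; st++) {
		struct fake_bits b = fake_bits_for(st);

		printf("%-14s frozen=%d node_partial=%d\n",
		       names[st], b.frozen, b.node_partial);
	}
	return 0;
}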

3. Testing
==========
We just did some simple testing on a server with 128 CPUs (2 nodes) to
compare performance for now.

- perf bench sched messaging -g 5 -t -l 100000
        baseline        RFC
        7.042s          6.966s
        7.022s          7.045s
        7.054s          6.985s

- stress-ng --rawpkt 128 --rawpkt-ops 100000000
        baseline        RFC
        2.42s           2.15s
        2.45s           2.16s
        2.44s           2.17s

The results above show about a 10% improvement on the stress-ng rawpkt
testcase, though not much improvement on the perf sched bench testcase.

Thanks for any comment and code review!

Chengming Zhou (6):
slub: Keep track of whether slub is on the per-node partial list
slub: Prepare __slab_free() for unfrozen partial slab out of node
partial list
slub: Don't freeze slabs for cpu partial
slub: Simplify acquire_slab()
slub: Introduce get_cpu_partial()
slub: Optimize deactivate_slab()

include/linux/page-flags.h | 2 +
mm/slab.h | 19 +++
mm/slub.c | 245 +++++++++++++++++++------------------
3 files changed, 150 insertions(+), 116 deletions(-)

--
2.20.1