Re: [PATCH 02/12] mm, slub: add CONFIG_SLUB_TINY

From: Hyeonggon Yoo
Date: Thu Nov 24 2022 - 06:33:26 EST


On Mon, Nov 21, 2022 at 06:11:52PM +0100, Vlastimil Babka wrote:
> For tiny systems that have used SLOB until now, SLUB might be
> impractical due to its higher memory usage. To help with that, introduce
> an option CONFIG_SLUB_TINY that modifies SLUB to use less memory.
> This is done by sacrificing scalability, security and debugging
> features, therefore not recommended for any system with more than 16MB
> RAM.
>
> This commit introduces the option and uses it to set other related
> options in a way that reduces memory usage.
>
> Signed-off-by: Vlastimil Babka <vbabka@xxxxxxx>
> ---
> mm/Kconfig | 21 +++++++++++++++++----
> mm/Kconfig.debug | 2 +-
> 2 files changed, 18 insertions(+), 5 deletions(-)
>
> diff --git a/mm/Kconfig b/mm/Kconfig
> index 57e1d8c5b505..5941cb34e30d 100644
> --- a/mm/Kconfig
> +++ b/mm/Kconfig
> @@ -230,6 +230,19 @@ config SLOB
>
> endchoice
>
> +config SLUB_TINY
> + bool "Configure SLUB for minimal memory footprint"
> + depends on SLUB && EXPERT
> + select SLAB_MERGE_DEFAULT
> + help
> + Configures the SLUB allocator in a way to achieve minimal memory
> + footprint, sacrificing scalability, debugging and other features.
> + This is intended only for the smallest system that had used the
> + SLOB allocator and is not recommended for systems with more than
> + 16MB RAM.
> +
> + If unsure, say N.
> +
> config SLAB_MERGE_DEFAULT
> bool "Allow slab caches to be merged"
> default y
> @@ -247,7 +260,7 @@ config SLAB_MERGE_DEFAULT
>
> config SLAB_FREELIST_RANDOM
> bool "Randomize slab freelist"
> - depends on SLAB || SLUB
> + depends on SLAB || SLUB && !SLUB_TINY
> help
> Randomizes the freelist order used on creating new pages. This
> security feature reduces the predictability of the kernel slab
> @@ -255,7 +268,7 @@ config SLAB_FREELIST_RANDOM
>
> config SLAB_FREELIST_HARDENED
> bool "Harden slab freelist metadata"
> - depends on SLAB || SLUB
> + depends on SLAB || SLUB && !SLUB_TINY
> help
> Many kernel heap attacks try to target slab cache metadata and
> other infrastructure. This options makes minor performance
> @@ -267,7 +280,7 @@ config SLAB_FREELIST_HARDENED
> config SLUB_STATS
> default n
> bool "Enable SLUB performance statistics"
> - depends on SLUB && SYSFS
> + depends on SLUB && SYSFS && !SLUB_TINY
> help
> SLUB statistics are useful to debug SLUBs allocation behavior in
> order find ways to optimize the allocator. This should never be
> @@ -279,7 +292,7 @@ config SLUB_STATS
>
> config SLUB_CPU_PARTIAL
> default y
> - depends on SLUB && SMP
> + depends on SLUB && SMP && !SLUB_TINY
> bool "SLUB per cpu partial cache"
> help
> Per cpu partial caches accelerate objects allocation and freeing
> diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
> index ce8dded36de9..fca699ad1fb0 100644
> --- a/mm/Kconfig.debug
> +++ b/mm/Kconfig.debug
> @@ -56,7 +56,7 @@ config DEBUG_SLAB
> config SLUB_DEBUG
> default y
> bool "Enable SLUB debugging support" if EXPERT
> - depends on SLUB && SYSFS
> + depends on SLUB && SYSFS && !SLUB_TINY
> select STACKDEPOT if STACKTRACE_SUPPORT
> help
> SLUB has extensive debug support features. Disabling these can
> --
> 2.38.1

Acked-by: Hyeonggon Yoo <42.hyeyoo@xxxxxxxxx>

A small comment:
SLAB || (SLUB && !SLUB_TINY) would be easier to interpret than
SLAB || SLUB && !SLUB_TINY. Kconfig gives && higher precedence
than ||, so both forms parse the same way; the explicit parentheses
just make the intent obvious at a glance.
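
For illustration, here is how the suggested spelling would look on one
of the entries touched by the patch (just a sketch of the readability
change, not a functional difference):

	config SLAB_FREELIST_RANDOM
		bool "Randomize slab freelist"
		depends on SLAB || (SLUB && !SLUB_TINY)

The same parenthesized form would apply to SLAB_FREELIST_HARDENED; the
other entries only chain && and need no parentheses.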

--
Thanks,
Hyeonggon