Re: [PATCH v2 0/9] slab: Introduce dedicated bucket allocator

From: Vlastimil Babka
Date: Mon Mar 25 2024 - 09:50:58 EST


On 3/5/24 11:10 AM, Kees Cook wrote:
> Hi,
>
> Repeating the commit logs for patch 4 here:
>
> Dedicated caches are available for fixed size allocations via
> kmem_cache_alloc(), but for dynamically sized allocations there is only
> the global kmalloc API's set of buckets available. This means it isn't
> possible to separate specific sets of dynamically sized allocations into
> a separate collection of caches.
>
> This leads to a use-after-free exploitation weakness in the Linux
> kernel since many heap memory spraying/grooming attacks depend on using
> userspace-controllable dynamically sized allocations to collide with
> fixed size allocations that end up in the same cache.
>
> While CONFIG_RANDOM_KMALLOC_CACHES provides a probabilistic defense
> against these kinds of "type confusion" attacks, including for fixed
> same-size heap objects, we can create a complementary deterministic
> defense for dynamically sized allocations.
>
> In order to isolate user-controllable sized allocations from system
> allocations, introduce kmem_buckets_create(), which behaves like
> kmem_cache_create(). (The next patch will introduce kmem_buckets_alloc(),
> which behaves like kmem_cache_alloc().)
>
> This allows for confining allocations to a dedicated set of sized caches
> (which have the same layout as the kmalloc caches).
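>
> For illustration, usage would look roughly like this (a sketch only: the
> foo_* names are made up, and the exact kmem_buckets_create() argument list
> shown here is illustrative rather than final):
>
>   static kmem_buckets *foo_buckets __ro_after_init;
>
>   static int __init foo_buckets_init(void)
>   {
>           /* Like kmem_cache_create(), but creates a whole set of
>            * kmalloc-style sized caches (no usercopy window here). */
>           foo_buckets = kmem_buckets_create("foo", SLAB_ACCOUNT,
>                                             0, 0, NULL);
>           return 0;
>   }
>   subsys_initcall(foo_buckets_init);
>
>   void *foo_alloc(size_t len)
>   {
>           /* Like kmalloc(), but only ever draws from foo_buckets. */
>           return kmem_buckets_alloc(foo_buckets, len, GFP_KERNEL);
>   }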
>
> In the future, once codetag allocation annotations exist, this can also be
> used to implement per-caller allocation cache isolation[0] even for dynamic
> allocations.
>
> Link: https://lore.kernel.org/lkml/202402211449.401382D2AF@keescook [0]
>
> After the implementation there are two example patches showing how this
> could be used for some repeat "offenders" that get used in exploits. There
> are more to be isolated beyond just these. Repeating the commit log for
> patch 8 here:
>
> The msg subsystem is a common target for exploiting[1][2][3][4][5][6]
> use-after-free type confusion flaws in the kernel for both read and
> write primitives. Avoid having a user-controlled size cache share the
> global kmalloc allocator by using a separate set of kmalloc buckets.
>
> Link: https://blog.hacktivesecurity.com/index.php/2022/06/13/linux-kernel-exploit-development-1day-case-study/ [1]
> Link: https://hardenedvault.net/blog/2022-11-13-msg_msg-recon-mitigation-ved/ [2]
> Link: https://www.willsroot.io/2021/08/corctf-2021-fire-of-salvation-writeup.html [3]
> Link: https://a13xp0p0v.github.io/2021/02/09/CVE-2021-26708.html [4]
> Link: https://google.github.io/security-research/pocs/linux/cve-2021-22555/writeup.html [5]
> Link: https://zplin.me/papers/ELOISE.pdf [6]
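>
> A minimal sketch of what the conversion could look like (the variable names
> and the exact kmem_buckets_create() arguments here are illustrative, not
> necessarily what the patch itself does): a dedicated bucket set is created
> at init time, with the usercopy window covering only the message data past
> the msg_msg header, and alloc_msg() then draws from it instead of the
> global kmalloc caches:
>
>   static kmem_buckets *msg_buckets __ro_after_init;
>
>   static int __init init_msg_buckets(void)
>   {
>           msg_buckets = kmem_buckets_create("msg_msg", SLAB_ACCOUNT,
>                                             sizeof(struct msg_msg),
>                                             DATALEN_MSG, NULL);
>           return 0;
>   }
>   subsys_initcall(init_msg_buckets);
>
>   /* ... and in alloc_msg(), replacing the plain kmalloc() call: */
>   msg = kmem_buckets_alloc(msg_buckets, size, GFP_KERNEL);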

Hi Kees,

after reading [1] I think the points raised there should be addressed, mainly
about the feasibility of converting users manually. On a related technical
note, I worry about what will become of /proc/slabinfo once we convert a
non-trivial number of users.

I would also be interested to hear Jann Horn et al.'s opinion, and whether
the SLAB_VIRTUAL effort will continue.

Thanks,
Vlastimil


[1]
https://dustri.org/b/notes-on-the-slab-introduce-dedicated-bucket-allocator-series.html

> -Kees
>
> v2: significant rewrite, generalized the buckets type, added kvmalloc style
> v1: https://lore.kernel.org/lkml/20240304184252.work.496-kees@xxxxxxxxxx/
>
> Kees Cook (9):
> slab: Introduce kmem_buckets typedef
> slub: Plumb kmem_buckets into __do_kmalloc_node()
> util: Introduce __kvmalloc_node() that can take kmem_buckets argument
> slab: Introduce kmem_buckets_create()
> slab: Introduce kmem_buckets_alloc()
> slub: Introduce kmem_buckets_alloc_track_caller()
> slab: Introduce kmem_buckets_valloc()
> ipc, msg: Use dedicated slab buckets for alloc_msg()
> mm/util: Use dedicated slab buckets for memdup_user()
>
> include/linux/slab.h | 50 +++++++++++++++++++++-------
> ipc/msgutil.c | 13 +++++++-
> lib/fortify_kunit.c | 2 +-
> mm/slab.h | 6 ++--
> mm/slab_common.c | 77 ++++++++++++++++++++++++++++++++++++++++++--
> mm/slub.c | 14 ++++----
> mm/util.c | 23 +++++++++----
> 7 files changed, 154 insertions(+), 31 deletions(-)
>