Re: [PATCH v2] kasan: Improve free meta storage in Generic KASAN

From: Juntong Deng
Date: Wed Nov 22 2023 - 04:31:29 EST


On 2023/11/22 10:27, Andrey Konovalov wrote:
On Tue, Nov 21, 2023 at 10:42 PM Juntong Deng <juntong.deng@xxxxxxxxxxx> wrote:

Currently, free meta can only be stored in the object if the object is
not smaller than the free meta.

With this improvement, even when the object is smaller than the free
meta, part of the free meta can still be stored in the object,
reducing the amount of redzone that has to be added.

Example:

free meta size: 16 bytes
alloc meta size: 16 bytes
object size: 8 bytes
optimal redzone size (object_size <= 64): 16 bytes

Before improvement:
actual redzone size = alloc meta size + free meta size = 32 bytes

After improvement:
actual redzone size = alloc meta size + (free meta size - object size)
= 24 bytes
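
Expressed with the identifiers from the diff below (rem_free_meta_size
is the patch's variable; "added" is only shorthand for this example):

  rem_free_meta_size = sizeof(struct kasan_free_meta) - object_size;
                       /* 16 - 8 = 8 bytes that do not fit in the object */
  added = sizeof(struct kasan_alloc_meta) + rem_free_meta_size;
                       /* 16 + 8 = 24 bytes, instead of 16 + 16 = 32 */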

Suggested-by: Dmitry Vyukov <dvyukov@xxxxxxxxxx>
Signed-off-by: Juntong Deng <juntong.deng@xxxxxxxxxxx>

I think this change as is does not work well with slub_debug.

slub_debug puts its metadata (redzone, tracks, and orig_size) right
after the object (see calculate_sizes and the comment before
check_pad_bytes). With the current code, KASAN's free meta either fits
within the object or is placed after the slub_debug metadata and
everything works well. With this change, KASAN's free meta tail goes
right past object_size, overlaps with the slub_debug metadata, and
thus can corrupt it.
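
Roughly, the per-object layout with slub_debug enabled is (simplified;
the left redzone, free pointer, and padding are omitted):

  | object (object_size) | redzone | tracks | orig_size | KASAN metas |

With this change, KASAN's free meta starts inside the object and its
tail runs past object_size, i.e. into the slub_debug redzone above.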

Thus, to make this patch work properly, we need to think carefully
about the layout of all the metadata and teach slub_debug that KASAN's
free meta can go past object_size. Possibly, adjusting s->inuse by the
size of KASAN's metas (along with moving kasan_cache_create and fixing
up set_orig_size) would be enough. But I'm not familiar enough with
the slub_debug code to be sure.

If you decide to proceed with improving this change, I've left some
comments for the current code below.

Thank you!

I delved into the memory layout of SLUB_DEBUG today.

I think a better option would be to not let the free meta extend past
the end of the object when SLUB_DEBUG is enabled.

In other words, when SLUB_DEBUG is enabled, the free meta keeps being
stored the way it was before this patch.

Even if we taught SLUB_DEBUG that KASAN's free meta may extend past
the object and moved SLUB_DEBUG's metadata further back, that would
still defeat the original design intent of SLUB_DEBUG.

SLUB_DEBUG detects out-of-bounds accesses by filling the redzones on
both sides of the object with a magic number. If SLUB_DEBUG's redzones
moved further back, leaving a gap next to the object, out-of-bounds
writes landing in that gap would go undetected, breaking the check.
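
As an illustration, the invariant amounts to something like this (a
sketch, not the kernel's check_bytes_and_report(); 0xbb is
SLUB_RED_INACTIVE from include/linux/poison.h):

  /*
   * Every redzone byte adjacent to the object must still hold the
   * magic value. An unfilled gap between object and redzone hides
   * out-of-bounds writes that land in it, and anything legitimately
   * stored there would look like corruption to this check.
   */
  static bool redzone_intact(const unsigned char *rz, size_t len)
  {
          size_t i;

          for (i = 0; i < len; i++)
                  if (rz[i] != 0xbb)      /* SLUB_RED_INACTIVE */
                          return false;
          return true;
  }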

I will send patch V3 to fix this issue.
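
The rough shape I have in mind, assuming the existing
__slub_debug_enabled() helper from mm/slab.h (a sketch only, V3 may
end up looking different):

  } else if (cache->object_size < sizeof(struct kasan_free_meta)) {
          if (__slub_debug_enabled()) {
                  /*
                   * slub_debug's metadata follows the object, so fall
                   * back to the previous behavior: store the whole
                   * free meta in the redzone.
                   */
                  cache->kasan_info.free_meta_offset = *size;
                  *size += sizeof(struct kasan_free_meta);
          } else {
                  rem_free_meta_size = sizeof(struct kasan_free_meta) -
                                  cache->object_size;
                  *size += rem_free_meta_size;
                  if (cache->kasan_info.alloc_meta_offset != 0)
                          cache->kasan_info.alloc_meta_offset +=
                                          rem_free_meta_size;
          }
  }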


---
V1 -> V2: Make kasan_metadata_size() adapt to the improved
free meta storage

mm/kasan/generic.c | 50 +++++++++++++++++++++++++++++++---------------
1 file changed, 34 insertions(+), 16 deletions(-)

diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
index 4d837ab83f08..802c738738d7 100644
--- a/mm/kasan/generic.c
+++ b/mm/kasan/generic.c
@@ -361,6 +361,8 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
 {
 	unsigned int ok_size;
 	unsigned int optimal_size;
+	unsigned int rem_free_meta_size;
+	unsigned int orig_alloc_meta_offset;
 
 	if (!kasan_requires_meta())
 		return;
@@ -394,6 +396,9 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
 		/* Continue, since free meta might still fit. */
 	}
 
+	ok_size = *size;
+	orig_alloc_meta_offset = cache->kasan_info.alloc_meta_offset;
+
 	/*
 	 * Add free meta into redzone when it's not possible to store
 	 * it in the object. This is the case when:
@@ -401,21 +406,26 @@ void kasan_cache_create(struct kmem_cache *cache, unsigned int *size,
 	 *    be touched after it was freed, or
 	 * 2. Object has a constructor, which means it's expected to
 	 *    retain its content until the next allocation, or
Please drop "or" on the line above.

-	 * 3. Object is too small.
 	 * Otherwise cache->kasan_info.free_meta_offset = 0 is implied.
+	 * Even if the object is smaller than free meta, it is still
+	 * possible to store part of the free meta in the object.
 	 */
-	if ((cache->flags & SLAB_TYPESAFE_BY_RCU) || cache->ctor ||
-	    cache->object_size < sizeof(struct kasan_free_meta)) {
-		ok_size = *size;
-
+	if ((cache->flags & SLAB_TYPESAFE_BY_RCU) || cache->ctor) {
 		cache->kasan_info.free_meta_offset = *size;
 		*size += sizeof(struct kasan_free_meta);
+	} else if (cache->object_size < sizeof(struct kasan_free_meta)) {
+		rem_free_meta_size = sizeof(struct kasan_free_meta) -
+				cache->object_size;
+		*size += rem_free_meta_size;
+		if (cache->kasan_info.alloc_meta_offset != 0)
+			cache->kasan_info.alloc_meta_offset += rem_free_meta_size;
+	}
 
-		/* If free meta doesn't fit, don't add it. */
-		if (*size > KMALLOC_MAX_SIZE) {
-			cache->kasan_info.free_meta_offset = KASAN_NO_FREE_META;
-			*size = ok_size;
-		}
+	/* If free meta doesn't fit, don't add it. */
+	if (*size > KMALLOC_MAX_SIZE) {
+		cache->kasan_info.free_meta_offset = KASAN_NO_FREE_META;
+		cache->kasan_info.alloc_meta_offset = orig_alloc_meta_offset;
+		*size = ok_size;
 	}
 
 	/* Calculate size with optimal redzone. */
@@ -464,12 +474,20 @@ size_t kasan_metadata_size(struct kmem_cache *cache, bool in_object)
 	if (in_object)
 		return (info->free_meta_offset ?
			0 : sizeof(struct kasan_free_meta));

This needs to be changed as well to something like
min(cache->object_size, sizeof(struct kasan_free_meta)). However, with
the slub_debug conflicts I mentioned above, we might need to change
this to something else.
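
For instance, something like this for the in_object branch (untested;
min() trips the kernel's type check here since object_size is an
unsigned int, hence min_t()):

  if (in_object)
          return (info->free_meta_offset ? 0 :
                  min_t(size_t, cache->object_size,
                        sizeof(struct kasan_free_meta)));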

-	else
-		return (info->alloc_meta_offset ?
-			sizeof(struct kasan_alloc_meta) : 0) +
-			((info->free_meta_offset &&
-			  info->free_meta_offset != KASAN_NO_FREE_META) ?
-			  sizeof(struct kasan_free_meta) : 0);
+	else {
+		size_t alloc_meta_size = info->alloc_meta_offset ?
+				sizeof(struct kasan_alloc_meta) : 0;
+		size_t free_meta_size = 0;
+
+		if (info->free_meta_offset != KASAN_NO_FREE_META) {
+			if (info->free_meta_offset)
+				free_meta_size = sizeof(struct kasan_free_meta);
+			else if (cache->object_size < sizeof(struct kasan_free_meta))
+				free_meta_size = sizeof(struct kasan_free_meta) -
+						cache->object_size;
+		}
+		return alloc_meta_size + free_meta_size;
+	}
 }
 
 static void __kasan_record_aux_stack(void *addr, bool can_alloc)
--
2.39.2