Re: [PATCH v3 4/8] slab: Add __alloc_size attributes for better bounds checking

From: Kees Cook
Date: Tue Oct 05 2021 - 23:06:34 EST


On Tue, Oct 05, 2021 at 06:47:17PM -0700, Andrew Morton wrote:
> On Thu, 30 Sep 2021 15:27:00 -0700 Kees Cook <keescook@xxxxxxxxxxxx> wrote:
>
> > As already done in GrapheneOS, add the __alloc_size attribute for regular
> > kmalloc interfaces, to provide additional hinting for better bounds
> > checking, assisting CONFIG_FORTIFY_SOURCE and other compiler
> > optimizations.
>
> x86_64 allmodconfig:

What compiler and version?

>
> In file included from ./arch/x86/include/asm/preempt.h:7,
> from ./include/linux/preempt.h:78,
> from ./include/linux/spinlock.h:55,
> from ./include/linux/mmzone.h:8,
> from ./include/linux/gfp.h:6,
> from ./include/linux/mm.h:10,
> from ./include/linux/mman.h:5,
> from lib/test_kasan_module.c:10:
> In function 'check_copy_size',
> inlined from 'copy_user_test' at ./include/linux/uaccess.h:191:6:
> ./include/linux/thread_info.h:213:4: error: call to '__bad_copy_to' declared with attribute error: copy destination size is too small
> 213 | __bad_copy_to();
> | ^~~~~~~~~~~~~~~
> In function 'check_copy_size',
> inlined from 'copy_user_test' at ./include/linux/uaccess.h:199:6:
> ./include/linux/thread_info.h:211:4: error: call to '__bad_copy_from' declared with attribute error: copy source size is too small
> 211 | __bad_copy_from();
> | ^~~~~~~~~~~~~~~~~
> make[1]: *** [lib/test_kasan_module.o] Error 1
> make: *** [lib] Error 2

Hah, yes, it caught an intentionally bad copy. The change below should
bypass the check, as I've had to do in LKDTM before. I will test...
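
For reference, here's a minimal userspace sketch (fake_kmalloc() is just a
hypothetical stand-in for kmalloc(); build with optimization, e.g. gcc -O2)
of why the hint makes the bad copy visible at compile time: the alloc_size
attribute is what lets __builtin_object_size() resolve the allocation size,
and that is the same information check_copy_size() acts on:

#include <stdio.h>
#include <stdlib.h>

/* Stand-in for kmalloc(): the attribute says arg 1 is the object size. */
__attribute__((alloc_size(1)))
static void *fake_kmalloc(size_t size)
{
	return malloc(size);
}

int main(void)
{
	char *p = fake_kmalloc(120);

	/*
	 * With the alloc_size hint (and optimization enabled), the
	 * compiler can resolve this to 120 instead of (size_t)-1.
	 * A fortified copy can then reject any too-large,
	 * compile-time-constant length outright.
	 */
	printf("object size: %zu\n", __builtin_object_size(p, 0));

	free(p);
	return 0;
}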

diff --git a/lib/test_kasan_module.c b/lib/test_kasan_module.c
index 7ebf433edef3..9fb2fb2937da 100644
--- a/lib/test_kasan_module.c
+++ b/lib/test_kasan_module.c
@@ -19,7 +19,12 @@ static noinline void __init copy_user_test(void)
 {
 	char *kmem;
 	char __user *usermem;
-	size_t size = 128 - KASAN_GRANULE_SIZE;
+	/*
+	 * This is marked volatile to avoid __alloc_size()
+	 * noticing the intentionally out-of-bounds copies
+	 * being done on the allocation.
+	 */
+	volatile size_t size = 128 - KASAN_GRANULE_SIZE;
 	int __maybe_unused unused;
 
 	kmem = kmalloc(size, GFP_KERNEL);
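
Sketching the other half of it (again userspace, same hypothetical
fake_kmalloc() stand-in): once the size is volatile it's no longer a
compile-time constant, so neither the allocation size nor the copy length
can be proven at build time, and the out-of-bounds copy is left for the
runtime checker to catch, which is exactly what the KASAN test wants:

#include <stdlib.h>
#include <string.h>

__attribute__((alloc_size(1)))
static void *fake_kmalloc(size_t size)
{
	return malloc(size);
}

void intentionally_bad_copy(const char *src)
{
	/* volatile: the optimizer must reload it and can't constant-fold it */
	volatile size_t size = 120;
	char *p = fake_kmalloc(size);

	/*
	 * Neither the allocation size nor the copy length is a
	 * compile-time constant now, so the compile-time check stays
	 * quiet and the overflow is only visible at runtime.
	 */
	memcpy(p, src, size + 5);
	free(p);
}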

--
Kees Cook