Re: [PATCH] mm/page_alloc: silence a KASAN false positive

From: Qian Cai
Date: Wed Jun 10 2020 - 08:26:10 EST


On Wed, Jun 10, 2020 at 07:54:50AM +0200, Dmitry Vyukov wrote:
> On Wed, Jun 10, 2020 at 7:22 AM Qian Cai <cai@xxxxxx> wrote:
> >
> > On s390, kernel_init_free_pages() ends up using memset() to clear the
> > pages freed from kmalloc_order(), which overwrites the KASAN redzone
> > that was set up from the end of the allocation size to the end of the
> > last page. Silence it by disabling KASAN checking around the clearing.
> > An example of the report is,
>
> Interesting. The reason why we did not hit it on x86_64 is because
> clear_page is implemented in asm (arch/x86/lib/clear_page_64.S) and
> thus is not instrumented. Arm64 probably does the same. However, on
> s390 clear_page is defined to memset.
> clear_[high]page are pretty extensively used in the kernel.
> We can either do this, or make clear_page non instrumented on s390 as
> well to match the existing implicit assumption. The benefit of the
> current approach is that we can find some real use-after-free's and
> maybe out-of-bounds on clear_page. The downside is that we may need
> more of these annotations. Thoughts?

Since we have already done the same thing in poison_page(), I suppose we
could do the same here. Also, clear_page() is used in many places on
s390, and it is not clear to me whether all of those callers are safe in
the same way.

More annotations might be required, so it is probably up to the s390
maintainers (CC'ed) whether they would rather leave clear_page()
uninstrumented like the other arches do.
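
For reference, the difference Dmitry points out looks roughly like this
(paraphrasing the headers from memory, so treat it as a sketch rather
than the exact code):

  /* arch/s390/include/asm/page.h: clear_page() is a plain memset(),
   * so the KASAN-instrumented memset() wrapper checks the whole page
   * against the shadow on every clear_highpage().
   */
  #define clear_page(page)	memset((page), 0, PAGE_SIZE)

  /* arch/x86/include/asm/page_64.h: clear_page() dispatches to the
   * assembly routines in arch/x86/lib/clear_page_64.S, which are not
   * instrumented, so the redzone overwrite is never reported there.
   */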

>
> > BUG: KASAN: slab-out-of-bounds in __free_pages_ok
> > Write of size 4096 at addr 000000014beaa000
> > Call Trace:
> > show_stack+0x152/0x210
> > dump_stack+0x1f8/0x248
> > print_address_description.isra.13+0x5e/0x4d0
> > kasan_report+0x130/0x178
> > check_memory_region+0x190/0x218
> > memset+0x34/0x60
> > __free_pages_ok+0x894/0x12f0
> > kfree+0x4f2/0x5e0
> > unpack_to_rootfs+0x60e/0x650
> > populate_rootfs+0x56/0x358
> > do_one_initcall+0x1f4/0xa20
> > kernel_init_freeable+0x758/0x7e8
> > kernel_init+0x1c/0x170
> > ret_from_fork+0x24/0x28
> > Memory state around the buggy address:
> > 000000014bea9f00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> > 000000014bea9f80: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> > >000000014beaa000: 03 fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
> > ^
> > 000000014beaa080: fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe fe
> > 000000014beaa100: fe fe fe fe fe fe fe fe fe fe fe fe fe fe
> >
> > Fixes: 6471384af2a6 ("mm: security: introduce init_on_alloc=1 and init_on_free=1 boot options")
> > Signed-off-by: Qian Cai <cai@xxxxxx>
> > ---
> > mm/page_alloc.c | 3 +++
> > 1 file changed, 3 insertions(+)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 727751219003..9954973f89a3 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -1164,8 +1164,11 @@ static void kernel_init_free_pages(struct page *page, int numpages)
> >  {
> >  	int i;
> >  
> > +	/* s390's use of memset() could override KASAN redzones. */
> > +	kasan_disable_current();
> >  	for (i = 0; i < numpages; i++)
> >  		clear_highpage(page + i);
> > +	kasan_enable_current();
> >  }
> >  
> >  static __always_inline bool free_pages_prepare(struct page *page,
> > --
> > 2.21.0 (Apple Git-122.2)
> >
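
If the s390 maintainers would rather keep clear_page() uninstrumented
like the asm-based arches, I suppose something along these lines could
be done instead of the annotation above. This is only an untested
sketch, and assumes the non-instrumented __memset() that the s390 KASAN
build already provides is usable there:

  /* arch/s390/include/asm/page.h (sketch, untested): bypass KASAN by
   * using the non-instrumented __memset(), matching the behaviour of
   * the asm clear_page() implementations on x86/arm64.
   */
  #define clear_page(page)	__memset((page), 0, PAGE_SIZE)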