Re: [merged mm-stable] kasan-add-atomic-tests.patch removed from -mm tree

From: Paul Heidekrüger
Date: Fri Feb 23 2024 - 15:32:54 EST


On 21.02.2024 16:03, Andrew Morton wrote:
>
> The quilt patch titled
> Subject: kasan: add atomic tests
> has been removed from the -mm tree. Its filename was
> kasan-add-atomic-tests.patch
>
> This patch was dropped because it was merged into the mm-stable branch
> of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
>
> ------------------------------------------------------
> From: Paul Heidekrüger <paul.heidekrueger@xxxxxx>
> Subject: kasan: add atomic tests
> Date: Fri, 2 Feb 2024 11:32:59 +0000
>
> Test that KASan can detect some unsafe atomic accesses.
>
> As discussed in the linked thread below, these tests attempt to cover
> the most common uses of atomics and, therefore, aren't exhaustive.
>
> Link: https://lkml.kernel.org/r/20240202113259.3045705-1-paul.heidekrueger@xxxxxx
> Link: https://lore.kernel.org/all/20240131210041.686657-1-paul.heidekrueger@xxxxxx/T/#u
> Signed-off-by: Paul Heidekrüger <paul.heidekrueger@xxxxxx>
> Closes: https://bugzilla.kernel.org/show_bug.cgi?id=214055
> Acked-by: Mark Rutland <mark.rutland@xxxxxxx>
> Cc: Marco Elver <elver@xxxxxxxxxx>
> Cc: Andrey Konovalov <andreyknvl@xxxxxxxxx>
> Cc: Alexander Potapenko <glider@xxxxxxxxxx>
> Cc: Andrey Ryabinin <ryabinin.a.a@xxxxxxxxx>
> Cc: Dmitry Vyukov <dvyukov@xxxxxxxxxx>
> Cc: Vincenzo Frascino <vincenzo.frascino@xxxxxxx>
> Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> ---
>
> mm/kasan/kasan_test.c | 79 ++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 79 insertions(+)
>
> --- a/mm/kasan/kasan_test.c~kasan-add-atomic-tests
> +++ a/mm/kasan/kasan_test.c
> @@ -697,6 +697,84 @@ static void kmalloc_uaf3(struct kunit *t
> KUNIT_EXPECT_KASAN_FAIL(test, ((volatile char *)ptr1)[8]);
> }
>
> +static void kasan_atomics_helper(struct kunit *test, void *unsafe, void *safe)
> +{
> + int *i_unsafe = (int *)unsafe;
> +
> + KUNIT_EXPECT_KASAN_FAIL(test, READ_ONCE(*i_unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, WRITE_ONCE(*i_unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, smp_load_acquire(i_unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, smp_store_release(i_unsafe, 42));
> +
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_read(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_set(unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_add(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_and(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_andnot(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_or(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_xor(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_xchg(unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_cmpxchg(unsafe, 21, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(unsafe, safe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_try_cmpxchg(safe, unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_sub_and_test(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_and_test(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_and_test(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_negative(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_add_unless(unsafe, 21, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_not_zero(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_inc_unless_negative(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_unless_positive(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_dec_if_positive(unsafe));
> +
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_read(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_set(unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_and(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_andnot(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_or(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_xor(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_xchg(unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_cmpxchg(unsafe, 21, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(unsafe, safe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_try_cmpxchg(safe, unsafe, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_sub_and_test(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_and_test(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_and_test(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_negative(42, unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_add_unless(unsafe, 21, 42));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_not_zero(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_inc_unless_negative(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_unless_positive(unsafe));
> + KUNIT_EXPECT_KASAN_FAIL(test, atomic_long_dec_if_positive(unsafe));
> +}
> +
> +static void kasan_atomics(struct kunit *test)
> +{
> + void *a1, *a2;
> +
> + /*
> + * Just as with kasan_bitops_tags(), we allocate 48 bytes of memory such
> + * that the following 16 bytes will make up the redzone.
> + */
> + a1 = kzalloc(48, GFP_KERNEL);
> + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a1);
> + a2 = kzalloc(sizeof(atomic_long_t), GFP_KERNEL);
> + KUNIT_ASSERT_NOT_ERR_OR_NULL(test, a2);
> +
> + /* Use atomics to access the redzone. */
> + kasan_atomics_helper(test, a1 + 48, a2);
> +
> + kfree(a1);
> + kfree(a2);
> +}
> +
> static void kmalloc_double_kzfree(struct kunit *test)
> {
> char *ptr;
> @@ -1883,6 +1961,7 @@ static struct kunit_case kasan_kunit_tes
> KUNIT_CASE(kasan_strings),
> KUNIT_CASE(kasan_bitops_generic),
> KUNIT_CASE(kasan_bitops_tags),
> + KUNIT_CASE(kasan_atomics),
> KUNIT_CASE(vmalloc_helpers_tags),
> KUNIT_CASE(vmalloc_oob),
> KUNIT_CASE(vmap_tags),
> _
>
> Patches currently in -mm which might be from paul.heidekrueger@xxxxxx are
>
>

Hi Andrew!

There was further discussion around this patch [1], which led to a v3 of the
patch above; that v3 may have gotten lost in the wave of emails.

I'm unsure what the protocol is now: should I send you a new patch with the
diff between the above patch and v3, or can you just use v3 instead of the
above patch?

I hope this doesn't cause too much trouble.

Many thanks,
Paul

[1]:
https://lore.kernel.org/all/20240212083342.3075850-1-paul.heidekrueger@xxxxxx/
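
P.S.: In case it is useful while v3 is sorted out, here is a minimal,
hypothetical sketch (not part of any patch version) of the access pattern the
new test exercises. It assumes a kernel built with CONFIG_KASAN=y; the 48-byte
allocation size and the redzone offset mirror the quoted code:

#include <linux/atomic.h>
#include <linux/slab.h>

static void atomic_redzone_sketch(void)
{
	/* 48-byte object: KASAN places a redzone right after it. */
	void *obj = kzalloc(48, GFP_KERNEL);

	if (!obj)
		return;

	/* obj + 48 points into the redzone; KASAN should report this access. */
	atomic_inc((atomic_t *)(obj + 48));

	kfree(obj);
}

The tests in the patch wrap accesses like this in KUNIT_EXPECT_KASAN_FAIL() so
that each one is checked to actually produce a KASAN report.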