[PATCH 10/82] locking/atomic/x86: Silence intentional wrapping addition

From: Kees Cook
Date: Mon Jan 22 2024 - 20:15:11 EST


Annotate arch_atomic_add_return() with __signed_wrap so the signed
overflow sanitizer does not instrument it. The addition is expected
to wrap around.
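
For reference, a minimal sketch of what __signed_wrap is assumed to
expand to (the actual definition is introduced elsewhere in this
series, in include/linux/compiler_types.h, and may differ):

	/*
	 * Assumed sketch: when the signed-overflow sanitizer is enabled,
	 * __signed_wrap asks the compiler not to instrument the function;
	 * otherwise it expands to nothing.
	 */
	#ifdef CONFIG_UBSAN_SIGNED_WRAP
	# define __signed_wrap \
		__attribute__((no_sanitize("signed-integer-overflow")))
	#else
	# define __signed_wrap
	#endif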

Cc: Will Deacon <will@xxxxxxxxxx>
Cc: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Cc: Boqun Feng <boqun.feng@xxxxxxxxx>
Cc: Mark Rutland <mark.rutland@xxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxxxxx>
Cc: Borislav Petkov <bp@xxxxxxxxx>
Cc: Dave Hansen <dave.hansen@xxxxxxxxxxxxxxx>
Cc: x86@xxxxxxxxxx
Cc: "H. Peter Anvin" <hpa@xxxxxxxxx>
Signed-off-by: Kees Cook <keescook@xxxxxxxxxxxx>
---
arch/x86/include/asm/atomic.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
index 55a55ec04350..4120cdd87da8 100644
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -80,7 +80,7 @@ static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
}
#define arch_atomic_add_negative arch_atomic_add_negative

-static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
+static __always_inline __signed_wrap int arch_atomic_add_return(int i, atomic_t *v)
{
return i + xadd(&v->counter, i);
}
--
2.34.1