[PATCH] powerpc/32: Clear volatile regs on syscall exit

From: Christophe Leroy
Date: Wed Feb 23 2022 - 12:11:58 EST


Commit a82adfd5c7cb ("hardening: Introduce CONFIG_ZERO_CALL_USED_REGS")
introduced an option to zero call-used registers at function exit.
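
For reference, CONFIG_ZERO_CALL_USED_REGS makes the compiler itself emit
that clearing on return from C functions (the kernel build passes
-fzero-call-used-regs=used-gpr when the option is selected). A minimal,
purely illustrative C sketch of the same behaviour, written with the
equivalent GCC function attribute:

	/* Illustrative only, not part of this patch. */
	__attribute__((zero_call_used_regs("used-gpr")))
	long read_once(long *p)
	{
		/*
		 * On return, the compiler zeroes the call-clobbered
		 * GPRs this function actually used, so a stale copy
		 * of *p does not survive in the caller's register
		 * state (the return value register is kept).
		 */
		return *p;
	}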

At present, PPC64 clears volatile registers on syscall exit, but PPC32
doesn't, for performance reasons.

Add that clearing to the PPC32 syscall exit path as well, but only when
CONFIG_ZERO_CALL_USED_REGS is selected.

On an 8xx, the null_syscall selftest gives:
- Without CONFIG_ZERO_CALL_USED_REGS : 288 cycles
- With CONFIG_ZERO_CALL_USED_REGS : 305 cycles
- With CONFIG_ZERO_CALL_USED_REGS + this patch : 319 cycles
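
For context, the null_syscall selftest
(tools/testing/selftests/powerpc/benchmarks/null_syscall.c) just times a
trivial syscall in a tight loop. A rough, powerpc-only sketch of that
kind of measurement (iteration count and 32-bit timebase handling are
simplified for illustration):

	#include <stdio.h>
	#include <unistd.h>

	static inline unsigned long mftb(void)
	{
		unsigned long tb;

		/* Read the lower word of the timebase. */
		asm volatile("mftb %0" : "=r" (tb));
		return tb;
	}

	int main(void)
	{
		unsigned long i, iters = 1000000, start, end;

		start = mftb();
		for (i = 0; i < iters; i++)
			getppid();	/* near-zero work in the kernel */
		end = mftb();

		printf("%lu timebase ticks per syscall\n",
		       (end - start) / iters);
		return 0;
	}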

Note that, independently of this patch, with pmac32_defconfig the
vmlinux size is as follows with and without CONFIG_ZERO_CALL_USED_REGS:

    text    data    bss      dec    hex filename
 9578869 2525210 194400 12298479 bba8ef vmlinux.without
10318045 2525210 194400 13037655 c6f057 vmlinux.with

That is a 7.7% increase in text size and a 6.0% increase in overall size.
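(Text grows by 10318045 - 9578869 = 739176 bytes, i.e. 739176 / 9578869
~= 7.7%; data and bss are unchanged, so the total 'dec' grows by the
same 739176 bytes, i.e. 739176 / 12298479 ~= 6.0%.)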

Signed-off-by: Christophe Leroy <christophe.leroy@xxxxxxxxxx>
---
arch/powerpc/kernel/entry_32.S | 15 +++++++++++++++
1 file changed, 15 insertions(+)

diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index 7748c278d13c..199f23092c02 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -151,6 +151,21 @@ syscall_exit_finish:
bne 3f
mtcr r5

+#ifdef CONFIG_ZERO_CALL_USED_REGS
+ /* Zero volatile regs that may contain sensitive kernel data */
+ li r0,0
+ li r4,0
+ li r5,0
+ li r6,0
+ li r7,0
+ li r8,0
+ li r9,0
+ li r10,0
+ li r11,0
+ li r12,0
+ mtctr r0
+ mtxer r0
+#endif
1: lwz r2,GPR2(r1)
lwz r1,GPR1(r1)
rfi
--
2.34.1