Re: [PATCH 1/2] x86/cfi: Fix ret_from_fork indirect calls

From: Peter Zijlstra
Date: Wed Jun 21 2023 - 05:30:49 EST


On Wed, Jun 21, 2023 at 10:52:17AM +0200, Peter Zijlstra wrote:
> On Tue, Jun 20, 2023 at 02:56:22PM -0700, Kees Cook wrote:
> > On Thu, Jun 15, 2023 at 09:35:47PM +0200, Peter Zijlstra wrote:
> > > The ret_from_fork stub does an indirect call to the kthread function,
> > > but only knows about Retpolines. Instead of making the asm more
> > > complicated, punt to C and let the compiler figure it out.
> > >
> > > Specifically, this makes it a proper kCFI indirect call when needed (in
> > > fact, it is nearly impossible to code a kCFI indirect call in asm).
> > >
> > > This was the only callsite that was still calling func()+0 on regular
> > > indirect functions.
> > >
> > > Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
> >
> > I worry this creates a calling gadget, but I don't think it really
> > counts since it's just converting between two prototypes. Regardless:
>
> Ah, since this will never be indirectly called, I should be able to
> annotate this so it never can be. Let me see what I can get the compiler
> to do.

I can't seem to manage to have it clobber its __cfi hash value. Ideally
we'd have an attribute to force the thing to 0 or something.

Best I can do is add __noendbr, which will inhibit the ENDBR.

Alternatively, I *can* write the thing in asm by hard-coding the hash
value, but that's not nice:

mov %rbx,%r11			# fn
mov %r12,%rdi			# arg
#ifdef CONFIG_CFI_CLANG
mov $0x76049ec3,%r10d		# negated kCFI type hash for int (*)(void *)
add -0xf(%r11),%r10d		# add the hash stored before the function entry
je 1f				# sum is zero iff the hashes match
ud2				# CFI violation
1:
#endif
CALL_NOSPEC r11

should work... but if that hash function ever changes we're in trouble.

---
Subject: x86/cfi: Fix ret_from_fork() indirect calls
From: Peter Zijlstra <peterz@xxxxxxxxxxxxx>
Date: Thu, 15 Jun 2023 21:35:47 +0200

The ret_from_fork() stub does an indirect call to the kthread function,
but only knows about Retpolines. Instead of making the asm more
complicated, punt to C and let the compiler figure it out.

Specifically, this makes it a proper kCFI indirect call when needed (in
fact, it is nearly impossible to code a kCFI indirect call in asm).

This was the only callsite that was still calling func()+0 on regular
indirect functions.

Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
Reviewed-by: Sami Tolvanen <samitolvanen@xxxxxxxxxx>
Link: https://lore.kernel.org/r/20230615193722.127844423@xxxxxxxxxxxxx
---
arch/x86/entry/entry_64.S | 6 ++++--
arch/x86/include/asm/switch_to.h | 2 ++
arch/x86/kernel/process_64.c | 5 +++++
3 files changed, 11 insertions(+), 2 deletions(-)

--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -304,8 +304,10 @@ SYM_CODE_START_NOALIGN(ret_from_fork)
1:
/* kernel thread */
UNWIND_HINT_END_OF_STACK
- movq %r12, %rdi
- CALL_NOSPEC rbx
+ movq %rbx, %rdi
+ movq %r12, %rsi
+ call kthread_from_fork
+
/*
* A kernel thread is allowed to return here after successfully
* calling kernel_execve(). Exit to userspace to complete the execve()
--- a/arch/x86/include/asm/switch_to.h
+++ b/arch/x86/include/asm/switch_to.h
@@ -74,6 +74,8 @@ static inline void update_task_stack(str
#endif
}

+extern __noendbr void kthread_from_fork(int (*fn)(void *), void *arg);
+
static inline void kthread_frame_init(struct inactive_task_frame *frame,
int (*fun)(void *), void *arg)
{
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -544,6 +544,11 @@ void compat_start_thread(struct pt_regs
}
#endif

+__visible __noendbr void kthread_from_fork(int (*fn)(void *), void *arg)
+{
+ fn(arg);
+}
+
/*
* switch_to(x,y) should switch tasks from x to y.
*