Re: [RFC][PATCH 02/17] x86/cpu: Clean up SRSO return thunk mess

From: Nikolay Borisov
Date: Wed Aug 09 2023 - 11:46:12 EST

On 9.08.23 at 10:12, Peter Zijlstra wrote:
> Use the existing configurable return thunk. There is absolutely no
> justification for having created this __x86_return_thunk alternative.
>
> To clarify, the whole thing looks like:
>
> Zen3/4 does:
>
> srso_alias_untrain_ret:
>         nop2
>         lfence
>         jmp srso_alias_return_thunk
>         int3
>
> srso_alias_safe_ret: // aliases srso_alias_untrain_ret just so
>         add $8, %rsp
>         ret
>         int3
>
> srso_alias_return_thunk:
>         call srso_alias_safe_ret
>         ud2
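
For the aliasing to actually occur, the linker has to place the pair at
addresses that collide in the branch predictor. If I read retpoline.S and
vmlinux.lds.S right, the two routines sit in dedicated sections
(.text..__x86.rethunk_untrain / .text..__x86.rethunk_safe) which the
linker script must place so that the two addresses differ only in the
bits the predictor folds together, roughly (bit positions are my reading,
treat as an assumption):

  srso_alias_safe_ret == srso_alias_untrain_ret ^
                         ((1 << 2) | (1 << 8) | (1 << 14) | (1 << 20))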

> While Zen1/2 does:
>
> srso_untrain_ret:
>         movabs $foo, %rax
>         lfence
>         call srso_safe_ret (jmp srso_return_thunk ?)
>         int3
>
> srso_safe_ret: // embedded in movabs immediate
>         add $8, %rsp
>         ret
>         int3
>
> srso_return_thunk:
>         call srso_safe_ret
>         ud2
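
The movabs trick is easiest to see at the byte level: mov $imm64, %rax is
encoded as 48 b8 followed by eight immediate bytes, and those eight bytes
are picked so that they themselves decode as the safe-ret sequence. My
reconstruction of the layout (the real code uses lea rather than add so
flags are preserved; the immediate then works out to 0xccccc30824648d48):

srso_untrain_ret:
        48 b8                   movabs $0xccccc30824648d48, %rax
srso_safe_ret:                  // = srso_untrain_ret + 2, inside the imm64
        48 8d 64 24 08          lea 8(%rsp), %rsp
        c3                      ret
        cc                      int3
        cc                      int3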

> While retbleed does:
>
> zen_untrain_ret:
>         test $0xcc, %bl
>         lfence
>         jmp zen_return_thunk
>         int3
>
> zen_return_thunk: // embedded in the test instruction
>         ret
>         int3
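
Same decoder trick, smaller scale: test $0xcc, %bl encodes as f6 c3 cc,
so decoding the same bytes from one byte in yields ret; int3:

zen_untrain_ret:
        f6 c3 cc                test $0xcc, %bl
zen_return_thunk:               // = zen_untrain_ret + 1, inside the test
           c3                   ret
              cc                int3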

> Where Zen1/2 flush the BTB using the instruction decoder trick
> (test, movabs), Zen3/4 use instruction aliasing. SRSO adds RSB (RAP in
> AMD speak) stuffing to force a return mis-predict.
>
> That is, the AMD retbleed is a form of Speculative-Type-Confusion
> where the branch predictor is trained to use the BTB to predict the
> RET address, while AMD inception/SRSO is a form of
> Speculative-Type-Confusion where another instruction is trained to be
> treated like a CALL instruction and poison the RSB (RAP).

> Pick one of three options at boot.
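
A minimal sketch of what that boot-time pick presumably ends up looking
like in arch/x86/kernel/cpu/bugs.c (the family check, names and defaults
here are my reading of the series, not quoted from it):

extern void __x86_return_thunk(void);
extern void zen_return_thunk(void);
extern void srso_return_thunk(void);
extern void srso_alias_return_thunk(void);

/* the one configurable thunk every return site gets patched to use */
void (*x86_return_thunk)(void) __ro_after_init = __x86_return_thunk;

static void __init srso_select_mitigation(void)
{
        if (boot_cpu_data.x86 == 0x19)          /* Zen3/4: aliasing pair */
                x86_return_thunk = srso_alias_return_thunk;
        else                                    /* Zen1/2: movabs trick */
                x86_return_thunk = srso_return_thunk;
}

/* ... and retbleed_select_mitigation() would pick zen_return_thunk. */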



So this boils down to simply removing one level of indirection: instead of patching the body of __x86_return_thunk, you directly patch the return sites with the correct thunk.
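
A rough sketch of the single patching path that then serves all three
variants (modeled from memory on apply_returns() in
arch/x86/kernel/alternative.c, details approximate):

extern s32 __return_sites[], __return_sites_end[];
extern void (*x86_return_thunk)(void);

static void __init apply_returns_sketch(void)
{
        s32 *s;

        for (s = __return_sites; s < __return_sites_end; s++) {
                u8 *addr = (u8 *)s + *s;   /* 'ret' site recorded by objtool */
                u8 insn[5] = { 0xe9 };     /* jmp rel32, rest filled below */
                s32 rel = (u8 *)x86_return_thunk - (addr + 5);

                memcpy(&insn[1], &rel, sizeof(rel));
                text_poke_early(addr, insn, sizeof(insn));
        }
}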


Reviewed-by: Nikolay Borisov <nik.borisov@xxxxxxxx>