Re: [PATCH] x86/mce: Reduce number of machine checks taken during recovery

From: Peter Zijlstra
Date: Fri Dec 17 2021 - 19:53:53 EST


On Wed, Dec 15, 2021 at 02:20:16PM -0800, Tony Luck wrote:
> ---
> arch/x86/lib/copy_user_64.S | 7 +++++++
> 1 file changed, 7 insertions(+)
>
> diff --git a/arch/x86/lib/copy_user_64.S b/arch/x86/lib/copy_user_64.S
> index 2797e630b9b1..8c53be99faa0 100644
> --- a/arch/x86/lib/copy_user_64.S
> +++ b/arch/x86/lib/copy_user_64.S
> @@ -233,12 +233,19 @@ EXPORT_SYMBOL(copy_user_enhanced_fast_string)
> * eax uncopied bytes or 0 if successful.
> */
> SYM_CODE_START_LOCAL(.Lcopy_user_handle_tail)
> + cmp $X86_TRAP_MC,%eax
> + je 3f
> movl %edx,%ecx
> 1: rep movsb
> 2: mov %ecx,%eax
> ASM_CLAC
> ret
>
> +3:
> + movl %edx,%eax
> + ASM_CLAC
> + ret
> +
> _ASM_EXTABLE_CPY(1b, 2b)
> SYM_CODE_END(.Lcopy_user_handle_tail)

This patch no longer applies. Also, you forgot to update the comment
above the function describing the calling convention: the function now
expects %rax to contain the trap number on entry.

It's also not obvious from the massive rambling on top that all call
sites were audited to make sure this is in fact true (it appears so).