Re: [PATCH 3/5] x86/boot/compressed/64: Check SEV encryption in 64-bit boot-path

From: Arvind Sankar
Date: Mon Oct 19 2020 - 13:00:16 EST


On Mon, Oct 19, 2020 at 05:11:19PM +0200, Joerg Roedel wrote:
> From: Joerg Roedel <jroedel@xxxxxxx>
>
> Check whether the hypervisor reported the correct C-bit when running as
> an SEV guest. Using a wrong C-bit position could be used to leak
> sensitive data from the guest to the hypervisor.
>
> The check function is in arch/x86/kernel/sev_verify_cbit.S so that it
> can be re-used in the running kernel image.
>
> Signed-off-by: Joerg Roedel <jroedel@xxxxxxx>
> ---

> +
> + /* Store value to memory and keep it in %r10 */
> + movq %r10, sev_check_data(%rip)
> +

Does there need to be a cache flush/invalidation between this write and
the read-back below to avoid simply hitting the cache, or does the
hardware take care of that?
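If an explicit flush does turn out to be necessary, one hypothetical way
to do it (assuming CLFLUSH is available, and that ordering against the
later load matters) would be something like:

```asm
	/* Hypothetical: force the just-written line out of the cache	*/
	/* hierarchy before re-reading it through the new mapping.	*/
	clflush	sev_check_data(%rip)
	mfence				/* order the flush before the CMPQ load */
```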

> + /* Backup current %cr3 value to restore it later */
> + movq %cr3, %r11
> +
> + /* Switch to new %cr3 - This might unmap the stack */
> + movq %rdi, %cr3

Does there need to be a TLB flush after this? When this runs from the
main kernel's head code, CR4.PGE is enabled, and the decompressor stub
sets the global bit in its identity mapping. If the original page
mapping is global, won't the read below still go through the old,
encrypted translation unless we flush it explicitly?
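If global entries are indeed the problem, a minimal sketch of flushing
them here would be toggling CR4.PGE, which invalidates all TLB entries
including global ones (this assumes a scratch register, %rax here, is
free to clobber at this point):

```asm
	/* Hypothetical: flush global TLB entries by toggling CR4.PGE	*/
	movq	%cr4, %rax
	andq	$~(1 << 7), %rax	/* clear CR4.PGE (bit 7)	*/
	movq	%rax, %cr4		/* flushes all TLB entries	*/
	orq	$(1 << 7), %rax
	movq	%rax, %cr4		/* re-enable global pages	*/
```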

> +
> + /*
> + * Compare value in %r10 with memory location - If C-Bit is incorrect
> + * this would read the encrypted data and make the check fail.
> + */
> + cmpq %r10, sev_check_data(%rip)
> +
> + /* Restore old %cr3 */
> + movq %r11, %cr3
> +
> + /* Check CMPQ result */
> + je 3f
> +
> + /*
> + * The check failed - Prevent any forward progress to prevent ROP
> + * attacks, invalidate the stack and go into a hlt loop.
> + */
> + xorq %rsp, %rsp
> + subq $0x1000, %rsp
> +2: hlt
> + jmp 2b
> +3:
> +#endif
> + ret
> +SYM_FUNC_END(sev_verify_cbit)
> +
> --
> 2.28.0
>