Re: [PATCH 2/6] x86/entry_64: Add VERW just before userspace transition

From: Pawan Gupta
Date: Mon Oct 23 2023 - 17:05:52 EST


On Mon, Oct 23, 2023 at 11:35:21AM -0700, Josh Poimboeuf wrote:
> On Fri, Oct 20, 2023 at 01:45:03PM -0700, Pawan Gupta wrote:
> > @@ -663,6 +665,10 @@ SYM_INNER_LABEL(swapgs_restore_regs_and_return_to_usermode, SYM_L_GLOBAL)
> > /* Restore RDI. */
> > popq %rdi
> > swapgs
> > +
> > + /* Mitigate CPU data sampling attacks, e.g. MDS */
> > + USER_CLEAR_CPU_BUFFERS
> > +
> > jmp .Lnative_iret
> >
> >
> > @@ -774,6 +780,9 @@ native_irq_return_ldt:
> > */
> > popq %rax /* Restore user RAX */
> >
> > + /* Mitigate CPU data sampling attacks, e.g. MDS */
> > + USER_CLEAR_CPU_BUFFERS
> > +
>
> Can the above two USER_CLEAR_CPU_BUFFERS be replaced with a single one
> just above native_irq_return_iret? Otherwise the native_irq_return_ldt
> case ends up getting two VERWs.

Wouldn't that make interrupts returning to the kernel also execute VERW?

  idtentry_body
    error_return
      restore_regs_and_return_to_kernel
        verw

native_irq_return_ldt doesn't look like a common case. Anyway, I will
see how to remove the extra VERW.
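
To make that concrete, here is a rough sketch of the exit flow as I read
entry_64.S (simplified, details elided); a single VERW hoisted to the
common IRET would sit on the kernel-return path as well:

  restore_regs_and_return_to_kernel		/* return to kernel */
	  ...
	  -> .Lnative_iret

  swapgs_restore_regs_and_return_to_usermode	/* return to user */
	  ...
	  swapgs
	  jmp	.Lnative_iret

  .Lnative_iret:
	  USER_CLEAR_CPU_BUFFERS	/* if placed here, kernel returns execute it too */
	  ...
  native_irq_return_iret:
	  iretq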

> > /*
> > * RSP now points to an ordinary IRET frame, except that the page
> > * is read-only and RSP[31:16] are preloaded with the userspace
> > @@ -1502,6 +1511,9 @@ nmi_restore:
> > std
> > movq $0, 5*8(%rsp) /* clear "NMI executing" */
> >
> > + /* Mitigate CPU data sampling attacks, e.g. MDS */
> > + USER_CLEAR_CPU_BUFFERS
> > +
> > /*
> > * iretq reads the "iret" frame and exits the NMI stack in a
> > * single instruction. We are returning to kernel mode, so this
>
> This isn't needed here. This is the NMI return-to-kernel path.

Yes, the VERW here can be omitted. But we probably need to check whether
an NMI occurring between the VERW and the ring transition will still
result in a VERW being executed after the NMI.
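
Spelling out the window I mean (a hypothetical interleaving, not
something I have reproduced):

	  USER_CLEAR_CPU_BUFFERS	/* buffers cleared */
  <-- NMI arrives here; the handler can refill the CPU buffers
  <-- NMI exits via its return-to-kernel path, which has no VERW
	  iretq				/* ring transition with potentially stale buffers */

That is the case I want to confirm is handled before relying on dropping
the VERW here.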