Re: [PATCH 4/8] kvm: vmx: Set IBPB when running a different VCPU

From: Paolo Bonzini
Date: Mon Jan 15 2018 - 04:21:25 EST


On 13/01/2018 10:29, Woodhouse, David wrote:
> On Fri, 2018-01-12 at 09:03 -0800, Jim Mattson wrote:
>> The point behind the IBPB in vmx_vcpu_load is to prevent one VCPU from
>> steering the speculative execution of the next. If the VMCS address is
>> recycled, vmx_vcpu_load doesn't realize that the VCPUs are different,
>> and so it won't issue the IBPB.
>
> I don't understand the sequence of events that could lead to this.
>
> If the VMCS is freed, surely per_cpu(current_vmcs, cpu) has to be
> cleared? If the VMCS is freed while it's still *active* on a CPU,
> that's a bug, surely? And if that CPU is later offlined and clears the
> VMCS, it's going to scribble on freed (and potentially re-used) memory.

You're right, svm.c needs it but vmx.c does not (because AMD has no
VMCLEAR equivalent, so nothing clears the per-CPU tracking when a VMCB
is freed and its address is recycled).

>>> +       if (have_spec_ctrl)
>>> +               wrmsrl(MSR_IA32_PRED_CMD, PRED_CMD_IBPB);
>
> Also, I think the same condition applies to the conditional branches
> over the IBPB-frobbing, as it does to setting IBRS. You can eschew the
> 'else lfence' only if you put in a comment showing that you've proved
> it's safe. Many of the other bits like this are being done with
> alternatives, which avoids that concern completely.
>
> But really, I don't like this series much. Don't say "let's do this
> until upstream supports...". Just fix it up properly, and add the
> generic X86_FEATURE_IBPB bit and use it. We have *too* many separate
> tiny patch sets, and we need to be getting our act together and putting
> it all in one.

I agree, this series is not meant to be committed. I only posted it to
get early reviews (and it was very effective for that purpose).

Paolo