Re: [PATCH v4 09/20] KVM:x86: Add common code of CET MSR access

From: Chao Gao
Date: Thu Jul 27 2023 - 04:14:36 EST


>> -	case MSR_KVM_GUEST_SSP:
>> -	case MSR_IA32_PL0_SSP ... MSR_IA32_INT_SSP_TAB:
>> 	case MSR_IA32_PL0_SSP ... MSR_IA32_PL3_SSP:
>> 		if (!kvm_cet_is_msr_accessible(vcpu, msr_info))
>> 			return 1;
>> 		if (is_noncanonical_address(data, vcpu))
>> 			return 1;
>> 		if (!IS_ALIGNED(data, 4))
>> 			return 1;
>> 		if (msr == MSR_IA32_PL0_SSP || msr == MSR_IA32_PL1_SSP ||
>> 		    msr == MSR_IA32_PL2_SSP) {
>> 			vcpu->arch.cet_s_ssp[msr - MSR_IA32_PL0_SSP] = data;
>> 		} else if (msr == MSR_IA32_PL3_SSP) {
>> 			kvm_set_xsave_msr(msr_info);
>> 		}
>> 		break;
>>
>>
>>
>> BTW, shouldn't bits 2:0 of MSR_KVM_GUEST_SSP be 0? i.e., for MSR_KVM_GUEST_SSP,
>> the alignment check should be IS_ALIGNED(data, 8).
>
>The check for GUEST_SSP should be consistent with that of PLx_SSPs, otherwise
>there would be issues

OK. I had the question because Gil said in a previous email:

IDT event delivery, when changing to rings 0-2 will load SSP from the
MSR corresponding to the new ring. These transitions check that bits
2:0 of the new value are all zero and will generate a nested fault if
any of those bits are set. (Far CALL using a call gate also checks this
if changing CPL.)

It sounds to me that, at least for CPL 0-2, SSP (or the synthetic
MSR_KVM_GUEST_SSP) should be 8-byte aligned. Otherwise, there will be a
nested fault when the CPU tries to load SSP.
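
To illustrate the distinction, here is a minimal sketch (a hypothetical
helper, not code from the patch) that keeps the 4-byte check for the
PLx_SSP MSRs but requires bits 2:0 to be clear for the synthetic
MSR_KVM_GUEST_SSP, reusing the is_noncanonical_address()/IS_ALIGNED()
helpers from the quoted hunk:

	/*
	 * Hypothetical helper sketching the suggestion: keep the 4-byte
	 * check for the PLx_SSP MSRs, but require bits 2:0 to be clear
	 * for MSR_KVM_GUEST_SSP, matching what IDT event delivery to
	 * rings 0-2 enforces when it loads SSP.
	 */
	static bool cet_ssp_write_ok(struct kvm_vcpu *vcpu, u32 msr, u64 data)
	{
		if (is_noncanonical_address(data, vcpu))
			return false;

		return IS_ALIGNED(data, msr == MSR_KVM_GUEST_SSP ? 8 : 4);
	}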

I might be overly cautious. No objection to using IS_ALIGNED(data, 4) for SSP.