Re: [PATCH 1/1] KVM: VMX: Cleanup VMX basic information defines and usages

From: Xin Li
Date: Thu Oct 19 2023 - 04:27:06 EST


On 10/18/2023 2:08 PM, Sean Christopherson wrote:

Add IA32_VMX_BASIC MSR bitfield shift macros and use them to define VMX
basic information bitfields.

Why? Unless something actually uses the shift independently, just define the
BIT_ULL(...) straightaway.

Well, a raw "BIT_ULL(49) | BIT_ULL(54) | BIT_ULL(55)" is hard to read; the bare
bit numbers say nothing about what the bits mean.

But let me remove the shifts that have no users for now.
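I.e., end up with something along these lines (just a sketch of the trimmed
defines, bit positions as in this patch; only shifts that still have an actual
user would keep a *_SHIFT macro):

#define VMX_BASIC_32BIT_PHYS_ADDR_ONLY		BIT_ULL(48)
#define VMX_BASIC_DUAL_MONITOR_TREATMENT	BIT_ULL(49)
#define VMX_BASIC_INOUT				BIT_ULL(54)
#define VMX_BASIC_TRUE_CTLS			BIT_ULL(55)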

Add VMX_BASIC_FEATURES and VMX_BASIC_RESERVED_BITS to form a valid bitmask
of IA32_VMX_BASIC MSR. As a result, to add a new VMX basic feature bit,
just change the 2 new macros in the header file.

Not if a new feature bit lands in the middle of one of the reserved ranges; then
the developer will have to update at least three macros and add a new one. More
below.

Yes, that is exactly the case for the VMX nested-exception feature bit.
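To illustrate the churn (the bit number below is made up purely for the example,
it is not the real position of the nested-exception bit):

/* Suppose the new feature bit landed at bit 58, inside the 63:56 reserved range. */
#define VMX_BASIC_NESTED_EXCEPTION	BIT_ULL(58)
/*
 * Then VMX_BASIC_FEATURES, VMX_BASIC_RESERVED_RANGE_2 and VMX_BASIC_RESERVED_BITS
 * all need updating, and the 63:56 range has to be split into two GENMASK_ULL()
 * pieces (57:56 and 63:59) -- exactly the "three macros plus a new one" above.
 */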



Also replace hardcoded VMX basic numbers with the new VMX basic macros.

Tested-by: Shan Kang <shan.kang@xxxxxxxxx>
Signed-off-by: Xin Li <xin3.li@xxxxxxxxx>
---
arch/x86/include/asm/msr-index.h | 31 ++++++++++++++++++++------
arch/x86/kvm/vmx/nested.c | 10 +++------
arch/x86/kvm/vmx/vmx.c | 2 +-
tools/arch/x86/include/asm/msr-index.h | 31 ++++++++++++++++++++------

Please drop the tools/ update; copying kernel headers into tools is a perf tools
thing that I want no part of.

https://lore.kernel.org/all/Y8bZ%2FJ98V5i3wG%2Fv@xxxxxxxxxx

Why can't we simply remove tools/arch/x86/include/asm/msr-index.h?



4 files changed, 52 insertions(+), 22 deletions(-)

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 1d111350197f..4607448ff805 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -1084,13 +1084,30 @@
#define MSR_IA32_VMX_PROCBASED_CTLS3 0x00000492
/* VMX_BASIC bits and bitmasks */
-#define VMX_BASIC_VMCS_SIZE_SHIFT 32
-#define VMX_BASIC_TRUE_CTLS (1ULL << 55)
-#define VMX_BASIC_64 0x0001000000000000LLU
-#define VMX_BASIC_MEM_TYPE_SHIFT 50
-#define VMX_BASIC_MEM_TYPE_MASK 0x003c000000000000LLU
-#define VMX_BASIC_MEM_TYPE_WB 6LLU
-#define VMX_BASIC_INOUT 0x0040000000000000LLU
+#define VMX_BASIC_VMCS_SIZE_SHIFT 32
+#define VMX_BASIC_ALWAYS_0 BIT_ULL(31)
+#define VMX_BASIC_RESERVED_RANGE_1 GENMASK_ULL(47, 45)
+#define VMX_BASIC_32BIT_PHYS_ADDR_ONLY_SHIFT 48
+#define VMX_BASIC_32BIT_PHYS_ADDR_ONLY BIT_ULL(VMX_BASIC_32BIT_PHYS_ADDR_ONLY_SHIFT)
+#define VMX_BASIC_DUAL_MONITOR_TREATMENT_SHIFT 49
+#define VMX_BASIC_DUAL_MONITOR_TREATMENT BIT_ULL(VMX_BASIC_DUAL_MONITOR_TREATMENT_SHIFT)
+#define VMX_BASIC_MEM_TYPE_SHIFT 50
+#define VMX_BASIC_MEM_TYPE_WB 6LLU
+#define VMX_BASIC_INOUT_SHIFT 54
+#define VMX_BASIC_INOUT BIT_ULL(VMX_BASIC_INOUT_SHIFT)
+#define VMX_BASIC_TRUE_CTLS_SHIFT 55
+#define VMX_BASIC_TRUE_CTLS BIT_ULL(VMX_BASIC_TRUE_CTLS_SHIFT)
+#define VMX_BASIC_RESERVED_RANGE_2 GENMASK_ULL(63, 56)
+
+#define VMX_BASIC_FEATURES \

Maybe VMX_BASIC_FEATURES_MASK to make it more obvious it's a mask of multiple
bits?

This is better!



+ (VMX_BASIC_DUAL_MONITOR_TREATMENT | \
+ VMX_BASIC_INOUT | \
+ VMX_BASIC_TRUE_CTLS)
+
+#define VMX_BASIC_RESERVED_BITS \
+ (VMX_BASIC_ALWAYS_0 | \
+ VMX_BASIC_RESERVED_RANGE_1 | \
+ VMX_BASIC_RESERVED_RANGE_2)

I don't see any value in defining VMX_BASIC_RESERVED_RANGE_1 and
VMX_BASIC_RESERVED_RANGE_2 separately. Or VMX_BASIC_ALWAYS_0 for that matter.
And I don't think these macros need to go in msr-index.h, e.g. just define them
above vmx_restore_vmx_basic() as that's likely going to be the only user, ever.

Hmm, I'm overusing macros here; better to simply do:

#define VMX_BASIC_RESERVED_BITS \
	(BIT_ULL(31) | GENMASK_ULL(47, 45) | GENMASK_ULL(63, 56))

We should probably also move the VMX MSR field definitions out of msr-index.h
into a dedicated VMX header.


And what's really missing is a static_assert() or BUILD_BUG_ON() to ensure that
VMX_BASIC_FEATURES doesn't overlap with VMX_BASIC_RESERVED_BITS.

Good idea; humans are not good at catching such overlaps just by staring at the bits.
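Roughly (assuming the two masks are visible wherever the assert ends up, e.g.
above vmx_restore_vmx_basic(), and using the _MASK name suggested above):

static_assert(!(VMX_BASIC_FEATURES_MASK & VMX_BASIC_RESERVED_BITS));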


/* Resctrl MSRs: */
/* - Intel: */
diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
index c5ec0ef51ff7..5280ba944c87 100644
--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -1203,21 +1203,17 @@ static bool is_bitwise_subset(u64 superset, u64 subset, u64 mask)
static int vmx_restore_vmx_basic(struct vcpu_vmx *vmx, u64 data)
{
- const u64 feature_and_reserved =
- /* feature (except bit 48; see below) */
- BIT_ULL(49) | BIT_ULL(54) | BIT_ULL(55) |
- /* reserved */
- BIT_ULL(31) | GENMASK_ULL(47, 45) | GENMASK_ULL(63, 56);
u64 vmx_basic = vmcs_config.nested.basic;
- if (!is_bitwise_subset(vmx_basic, data, feature_and_reserved))
+ if (!is_bitwise_subset(vmx_basic, data,
+ VMX_BASIC_FEATURES | VMX_BASIC_RESERVED_BITS))
return -EINVAL;
/*
* KVM does not emulate a version of VMX that constrains physical
* addresses of VMX structures (e.g. VMCS) to 32-bits.
*/
- if (data & BIT_ULL(48))
+ if (data & VMX_BASIC_32BIT_PHYS_ADDR_ONLY)
return -EINVAL;
if (vmx_basic_vmcs_revision_id(vmx_basic) !=
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 72e3943f3693..f597243d6a72 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2701,7 +2701,7 @@ static int setup_vmcs_config(struct vmcs_config *vmcs_conf,
#ifdef CONFIG_X86_64
/* IA-32 SDM Vol 3B: 64-bit CPUs always have VMX_BASIC_MSR[48]==0. */
- if (vmx_msr_high & (1u<<16))
+ if (vmx_msr_high & (1u << (VMX_BASIC_32BIT_PHYS_ADDR_ONLY_SHIFT - 32)))

In all honesty, I find the existing code easier to read. I'm definitely not
saying the existing code is good, but IMO this is at best a wash.

I would much rather we do something like this and move away from the hi/lo crud
entirely:

I once saw a TDX patch that does most of this cleanup.


diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 86ce9efe6c66..f103980c3d02 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2693,28 +2693,28 @@ static int setup_vmcs_config(struct vmcs_config *vmcs_conf,
_vmexit_control &= ~x_ctrl;
}
- rdmsr(MSR_IA32_VMX_BASIC, vmx_msr_low, vmx_msr_high);
+ rdmsrl(MSR_IA32_VMX_BASIC, vmx_msr);
/* IA-32 SDM Vol 3B: VMCS size is never greater than 4kB. */
- if ((vmx_msr_high & 0x1fff) > PAGE_SIZE)
+ if (VMX_BASIC_VMCS_SIZE(vmx_msr) > PAGE_SIZE)
return -EIO;
#ifdef CONFIG_X86_64
/* IA-32 SDM Vol 3B: 64-bit CPUs always have VMX_BASIC_MSR[48]==0. */
- if (vmx_msr_high & (1u<<16))
+ if (vmx_msr & VMX_BASIC_32BIT_PHYS_ADDR_ONLY)
return -EIO;
#endif
/* Require Write-Back (WB) memory type for VMCS accesses. */
- if (((vmx_msr_high >> 18) & 15) != 6)
+ if (VMX_BASIC_VMCS_MEMTYPE(vmx_msr) != VMX_BASIC_MEM_TYPE_WB)
return -EIO;
rdmsrl(MSR_IA32_VMX_MISC, misc_msr);
- vmcs_conf->size = vmx_msr_high & 0x1fff;
- vmcs_conf->basic_cap = vmx_msr_high & ~0x1fff;
+ vmcs_conf->size = VMX_BASIC_VMCS_SIZE(vmx_msr);
+ vmcs_conf->basic_cap = ????(vmx_msr);
- vmcs_conf->revision_id = vmx_msr_low;
+ vmcs_conf->revision_id = (u32)vmx_msr;
vmcs_conf->pin_based_exec_ctrl = _pin_based_exec_control;
vmcs_conf->cpu_based_exec_ctrl = _cpu_based_exec_control;
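FWIW, the helpers this diff assumes could look roughly like the below (a sketch
only; VMX_BASIC_VMCS_SIZE()/VMX_BASIC_VMCS_MEMTYPE() are not defined anywhere
yet, field positions per SDM Vol. 3 Appendix A.1, matching the old
"vmx_msr_high & 0x1fff" and "(vmx_msr_high >> 18) & 15" logic):

#define VMX_BASIC_VMCS_SIZE(basic_msr)		(((basic_msr) & GENMASK_ULL(44, 32)) >> 32)
#define VMX_BASIC_VMCS_MEMTYPE(basic_msr)	(((basic_msr) & GENMASK_ULL(53, 50)) >> 50)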


--
Thanks!
Xin