Re: [PATCH v5 1/9] x86/split_lock: Rework the initialization flow of split lock detection

From: Xiaoyao Li
Date: Mon Mar 23 2020 - 21:11:02 EST


On 3/24/2020 4:24 AM, Thomas Gleixner wrote:
> Thomas Gleixner <tglx@xxxxxxxxxxxxx> writes:
>> Xiaoyao Li <xiaoyao.li@xxxxxxxxx> writes:

>>> The current initialization flow of split lock detection has the
>>> following issues:
>>> 1. It assumes the initial value of MSR_TEST_CTRL.SPLIT_LOCK_DETECT is
>>>    zero. However, it's possible that BIOS/firmware has set it.

>> Ok.
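
Just to illustrate issue 1 (a minimal sketch, not part of the patch):
the kernel has to read the MSR to learn what firmware left there,
instead of assuming it boots as zero:

	u64 ctrl;

	/* BIOS/firmware may already have set SPLIT_LOCK_DETECT. */
	if (!rdmsrl_safe(MSR_TEST_CTRL, &ctrl) &&
	    (ctrl & MSR_TEST_CTRL_SPLIT_LOCK_DETECT))
		pr_info("split lock detection enabled by firmware\n");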

>>> 2. The X86_FEATURE_SPLIT_LOCK_DETECT flag is unconditionally set even
>>>    when there is a virtualization flaw: FMS indicates the feature
>>>    exists while it's actually not supported.
>>>
>>> 3. Because of #2, KVM cannot rely on the X86_FEATURE_SPLIT_LOCK_DETECT
>>>    flag to verify that the feature really exists, so it cannot expose
>>>    it to the guest.

>> Sorry, this does not make any sense. KVM is the hypervisor, so it
>> better be able to rely on the detect flag. Unless you talk about
>> nested virt and a broken L1 hypervisor.
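
The flaw I mean is exactly that case: FMS matches a CPU that has the
feature, but the MSR is not actually there. Once the flag is only set
after a successful MSR probe, KVM can key guest exposure off of it,
roughly like this (a sketch only, assuming the MSR_IA32_CORE_CAPS_*
defines from this series):

	static u64 kvm_get_core_capabilities(void)
	{
		u64 data = 0;

		/* Only advertise what the host has really probed. */
		if (boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT))
			data |= MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT;

		return data;
	}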

>>> To solve these issues, introduce a new sld_state, "sld_not_exist",
>>> as

>> The usual naming convention is sld_not_supported.

> But this extra state is not needed at all, it already exists:
>
>      X86_FEATURE_SPLIT_LOCK_DETECT
>
> You just need to make split_lock_setup() a bit smarter. Something like
> the below. It just wants to be split into separate patches.

> Thanks,
>
>         tglx
> ---
> --- a/arch/x86/kernel/cpu/intel.c
> +++ b/arch/x86/kernel/cpu/intel.c
> @@ -45,6 +45,7 @@ enum split_lock_detect_state {
>   * split lock detect, unless there is a command line override.
>   */
>  static enum split_lock_detect_state sld_state = sld_off;
> +static DEFINE_PER_CPU(u64, msr_test_ctrl_cache);

I used a percpu cache in v3, but people preferred Tony's approach of
caching the reserved bits [1].

If you prefer the percpu cache, I'll use it in the next version.

[1]: https://lore.kernel.org/lkml/20200303192242.GU1439@xxxxxxxxxxxxxxx/
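
For completeness, the runtime toggle with the percpu cache would look
something like this (a sketch; sld_update_msr() is just an illustrative
name here):

	static void sld_update_msr(bool on)
	{
		u64 test_ctrl_val = this_cpu_read(msr_test_ctrl_cache);

		if (on)
			test_ctrl_val |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
		else
			test_ctrl_val &= ~MSR_TEST_CTRL_SPLIT_LOCK_DETECT;

		wrmsrl(MSR_TEST_CTRL, test_ctrl_val);
	}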

>  /*
>   * Processors which have self-snooping capability can handle conflicting
> @@ -984,11 +985,32 @@ static inline bool match_option(const ch
>  	return len == arglen && !strncmp(arg, opt, len);
>  }
> +static bool __init split_lock_verify_msr(bool on)
> +{
> +	u64 ctrl, tmp;
> +
> +	if (rdmsrl_safe(MSR_TEST_CTRL, &ctrl))
> +		return false;
> +	if (on)
> +		ctrl |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
> +	else
> +		ctrl &= ~MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
> +	if (wrmsrl_safe(MSR_TEST_CTRL, ctrl))
> +		return false;
> +	rdmsrl(MSR_TEST_CTRL, tmp);
> +	return ctrl == tmp;
> +}
> +
>  static void __init split_lock_setup(void)
>  {
>  	char arg[20];
>  	int i, ret;
>
> +	if (!split_lock_verify_msr(true) || !split_lock_verify_msr(false)) {
> +		pr_info("MSR access failed: Disabled\n");
> +		return;
> +	}
> +

I did a similar thing in my v3; however, Sean raised the concern that
toggling the MSR bit before parsing the kernel parameter is bad
behavior. [2]

[2]: https://lore.kernel.org/kvm/20200305162311.GG11500@xxxxxxxxxxxxxxx/
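
To address that, the flow could parse the parameter first and only then
write the MSR once, in the state the user asked for. A rough sketch
(reusing sld_options/match_option from the patch and your
split_lock_verify_msr() above):

	static void __init split_lock_setup(void)
	{
		enum split_lock_detect_state state = sld_warn;
		char arg[20];
		int i, ret;

		ret = cmdline_find_option(boot_command_line, "split_lock_detect",
					  arg, sizeof(arg));
		if (ret >= 0) {
			for (i = 0; i < ARRAY_SIZE(sld_options); i++) {
				if (match_option(arg, ret, sld_options[i].option)) {
					state = sld_options[i].state;
					break;
				}
			}
		}

		/* One verify/init write, after the user's choice is known. */
		if (!split_lock_verify_msr(state != sld_off)) {
			pr_info("MSR access failed: Disabled\n");
			return;
		}

		sld_state = state;
		setup_force_cpu_cap(X86_FEATURE_SPLIT_LOCK_DETECT);
	}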