Re: [Patch v5 08/16] smt: Create cpu_smt_enabled static key for SMT specific code

From: Tim Chen
Date: Mon Nov 19 2018 - 13:08:57 EST


On 11/19/2018 06:57 AM, Peter Zijlstra wrote:
> On Fri, Nov 16, 2018 at 05:53:51PM -0800, Tim Chen wrote:
>> In later code, STIBP will be turned on/off in the context switch code
>> path when SMT is enabled. Checks for SMT are best avoided on such
>> hot paths.
>>
>> Create cpu_smt_enabled static key to turn on such SMT specific code
>> statically.
>
> AFAICT this patch only follows the SMT control knob but not the actual
> topology state.
>
> And, as I previously wrote, we already have sched_smt_present, which is
> supposed to do much the same.
>
> All you need is the below to make it accurately track the topology.
>
> ---
> Subject: sched/smt: Make sched_smt_present track topology
>
> Currently the sched_smt_present static key is only enabled when we
> encounter SMT topology. However, there is demand to also disable the key
> when the topology changes such that there is no SMT present anymore.
>
> Implement this by making the key count the number of cores that have SMT
> enabled.
>
> In particular, the SMT topology bits are set before we enable
> interrupts and similarly, are cleared after we disable interrupts for
> the last time and die.
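
(For reference, the mechanics in Peter's patch are a per-core count
driven from the CPU hotplug callbacks; roughly, in sched_cpu_activate()
and sched_cpu_deactivate():

	/* going up: this core just gained its second online thread */
	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
		static_branch_inc_cpuslocked(&sched_smt_present);

	/* going down: this core is about to drop to a single thread */
	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
		static_branch_dec_cpuslocked(&sched_smt_present);

so the key reads true iff at least one online core still has an SMT
sibling.)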


Peter & Thomas,

Any objection if I export sched_smt_present after including
Peter's patch and use it in spec_ctrl_update_msr() instead?

Something like this?

Tim

diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 943e90d..62fc3af 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -410,8 +410,7 @@ static __always_inline void spec_ctrl_update_msr(unsigned long tifn)
 	 * Need STIBP defense against Spectre v2 attack
 	 * if SMT is in use and enhanced IBRS is unsupported.
 	 */
-	if (static_branch_likely(&cpu_smt_enabled) &&
-	    !static_cpu_has(X86_FEATURE_USE_IBRS_ENHANCED))
+	if (cpu_smt_present() && !static_cpu_has(X86_FEATURE_USE_IBRS_ENHANCED))
 		msr |= stibp_tif_to_spec_ctrl(tifn);

 	wrmsrl(MSR_IA32_SPEC_CTRL, msr);
diff --git a/include/linux/cpu.h b/include/linux/cpu.h
index 3d90155..e3d985e 100644
--- a/include/linux/cpu.h
+++ b/include/linux/cpu.h
@@ -68,6 +68,24 @@ struct device *cpu_device_create(struct device *parent, void *drvdata,
 extern ssize_t arch_cpu_release(const char *, size_t);
 #endif

+#ifdef CONFIG_SCHED_SMT
+
+extern struct static_key_false sched_smt_present;
+
+static inline bool cpu_smt_present(void)
+{
+	return static_branch_unlikely(&sched_smt_present);
+}
+
+#else
+
+static inline bool cpu_smt_present(void)
+{
+	return false;
+}
+
+#endif
+
 /*
  * These states are not related to the core CPU hotplug mechanism. They are
  * used by various (sub)architectures to track internal state
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 618577f..e1e3f09 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -937,8 +937,6 @@ static inline int cpu_of(struct rq *rq)

 #ifdef CONFIG_SCHED_SMT

-extern struct static_key_false sched_smt_present;
-
 extern void __update_idle_core(struct rq *rq);

 static inline void update_idle_core(struct rq *rq)
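
For completeness, the key itself would stay defined in
kernel/sched/core.c as it is today:

	#ifdef CONFIG_SCHED_SMT
	DEFINE_STATIC_KEY_FALSE(sched_smt_present);
	#endif

so moving the extern declaration from kernel/sched/sched.h to
<linux/cpu.h> only changes who can test the key, not where it lives.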