Re: [PATCH] arm: Fix topology setup in case of CPU hotplug for CONFIG_SCHED_MC

From: Lukasz Luba
Date: Wed Nov 20 2019 - 06:16:37 EST


Hi Dietmar,

On 11/20/19 10:42 AM, Dietmar Eggemann wrote:
Commit ca74b316df96 ("arm: Use common cpu_topology structure and
functions.") changed cpu_coregroup_mask() from the ARM32 specific
implementation in arch/arm/include/asm/topology.h to the one shared
with ARM64 and RISCV in drivers/base/arch_topology.c.
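
For reference, the old Arm32 version simply returned the CPU's
core_sibling mask; roughly (a paraphrase, details may differ):

const struct cpumask *cpu_coregroup_mask(int cpu)
{
	/* Arm32 (pre ca74b316df96): the MC level is just core_sibling */
	return &cpu_topology[cpu].core_sibling;
}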

Currently on Arm32 (TC2 w/ CONFIG_SCHED_MC) the task scheduler setup
code (w/ CONFIG_SCHED_DEBUG) shows this during CPU hotplug:

ERROR: groups don't span domain->span

It happens on scheduler domain MC, for the CPUs in the cluster of the
CPU which gets hot-plugged out.

Turns out that the shared cpu_coregroup_mask() requires the
hot-plugged CPU to be removed from the core_sibling mask via
remove_cpu_topology(). Otherwise the 'is core_sibling a subset of
cpumask_of_node()' check fails, and the task scheduler ends up using
cpumask_of_node instead of core_sibling, which is wrong on scheduler
domain MC.
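
For reference, the shared helper in drivers/base/arch_topology.c does
roughly the following (a paraphrase of the ~v5.4 code, details may
differ). With a stale core_sibling that still contains the hot-plugged
CPU, the subset check fails and the node mask is returned instead:

const struct cpumask *cpu_coregroup_mask(int cpu)
{
	const cpumask_t *core_mask = cpumask_of_node(cpu_to_node(cpu));

	/* Prefer the smaller of the NUMA, core or LLC sibling masks */
	if (cpumask_subset(&cpu_topology[cpu].core_sibling, core_mask))
		core_mask = &cpu_topology[cpu].core_sibling;

	if (cpu_topology[cpu].llc_id != -1 &&
	    cpumask_subset(&cpu_topology[cpu].llc_sibling, core_mask))
		core_mask = &cpu_topology[cpu].llc_sibling;

	return core_mask;
}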

e.g. CPU3 hot-plugged out on TC2 [cluster0: 0,3-4 cluster1: 1-2]:

cpu_coregroup_mask(): CPU3 cpumask_of_node=0-2,4 core_sibling=0,3-4
                                                                ^
should be:

cpu_coregroup_mask(): CPU3 cpumask_of_node=0-2,4 core_sibling=0,4

Add remove_cpu_topology() to __cpu_disable() to remove the CPU from the
topology masks in case of a CPU hotplug out operation.
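
For reference, remove_cpu_topology() in drivers/base/arch_topology.c
clears the outgoing CPU from the masks of all its siblings and then
resets its own entry; roughly (a paraphrase, details may differ):

void remove_cpu_topology(unsigned int cpu)
{
	int sibling;

	/* Drop @cpu from the core/thread/LLC masks of its siblings */
	for_each_cpu(sibling, topology_core_cpumask(cpu))
		cpumask_clear_cpu(cpu, topology_core_cpumask(sibling));
	for_each_cpu(sibling, topology_sibling_cpumask(cpu))
		cpumask_clear_cpu(cpu, topology_sibling_cpumask(sibling));
	for_each_cpu(sibling, topology_llc_cpumask(cpu))
		cpumask_clear_cpu(cpu, topology_llc_cpumask(sibling));

	/* Reset @cpu's own topology record to the 'unknown' state */
	clear_cpu_topology(cpu);
}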

At the same time, tweak store_cpu_topology() slightly so that it calls
update_siblings_masks() on a CPU hotplug-in operation via
secondary_start_kernel()->smp_store_cpu_info().

This aligns the Arm32 implementation with the Arm64 one.

Looks good to me.

Tested-and-Reviewed-by: Lukasz Luba <lukasz.luba@xxxxxxx>


Fixes: ca74b316df96 ("arm: Use common cpu_topology structure and functions")
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@xxxxxxx>
---
 arch/arm/kernel/smp.c      |  2 ++
 arch/arm/kernel/topology.c | 15 +++++++--------
 2 files changed, 9 insertions(+), 8 deletions(-)

diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index 4b0bab2607e4..139c0d98fa29 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -240,6 +240,8 @@ int __cpu_disable(void)
 	if (ret)
 		return ret;
 
+	remove_cpu_topology(cpu);
+
 	/*
 	 * Take this CPU offline. Once we clear this, we can't return,
 	 * and we must not schedule until we're ready to give up the cpu.
diff --git a/arch/arm/kernel/topology.c b/arch/arm/kernel/topology.c
index 5b9faba03afb..b37b0a340991 100644
--- a/arch/arm/kernel/topology.c
+++ b/arch/arm/kernel/topology.c
@@ -196,9 +196,8 @@ void store_cpu_topology(unsigned int cpuid)
 	struct cpu_topology *cpuid_topo = &cpu_topology[cpuid];
 	unsigned int mpidr;
 
-	/* If the cpu topology has been already set, just return */
-	if (cpuid_topo->core_id != -1)
-		return;
+	if (cpuid_topo->package_id != -1)
+		goto topology_populated;
 
 	mpidr = read_cpuid_mpidr();
 
@@ -231,14 +230,14 @@ void store_cpu_topology(unsigned int cpuid)
 		cpuid_topo->package_id = -1;
 	}
 
-	update_siblings_masks(cpuid);
-
 	update_cpu_capacity(cpuid);
 
 	pr_info("CPU%u: thread %d, cpu %d, socket %d, mpidr %x\n",
-		cpuid, cpu_topology[cpuid].thread_id,
-		cpu_topology[cpuid].core_id,
-		cpu_topology[cpuid].package_id, mpidr);
+		cpuid, cpuid_topo->thread_id, cpuid_topo->core_id,
+		cpuid_topo->package_id, mpidr);
+
+topology_populated:
+	update_siblings_masks(cpuid);
 }
 
 static inline int cpu_corepower_flags(void)