Re: [PATCH v3] pseries: prevent free CPU ids to be reused on another node

From: Laurent Dufour
Date: Wed Apr 07 2021 - 11:01:03 EST


On 07/04/2021 at 16:55, Nathan Lynch wrote:
Laurent Dufour <ldufour@xxxxxxxxxxxxx> writes:
Changes since V2, addressing Nathan's comments:
- Remove the retry feature
- Reduce the number of local variables (removing 'i')

I was more interested in not having two NUMA node variables in the
function, 'node' and 'nid', hoping at least one of them could have
a more descriptive name. See below.

static int pseries_add_processor(struct device_node *np)
{
- unsigned int cpu;
+ unsigned int cpu, node;
cpumask_var_t candidate_mask, tmp;
- int err = -ENOSPC, len, nthreads, i;
+ int err = -ENOSPC, len, nthreads, nid;
const __be32 *intserv;
intserv = of_get_property(np, "ibm,ppc-interrupt-server#s", &len);
@@ -163,9 +169,17 @@ static int pseries_add_processor(struct device_node *np)
zalloc_cpumask_var(&candidate_mask, GFP_KERNEL);
zalloc_cpumask_var(&tmp, GFP_KERNEL);
+ /*
+ * Fetch, from the DT node read by dlpar_configure_connector(), the id
+ * of the NUMA node the added CPU belongs to.
+ */
+ nid = of_node_to_nid(np);
+ if (nid < 0 || !node_possible(nid))
+ nid = first_online_node;
+
nthreads = len / sizeof(u32);
- for (i = 0; i < nthreads; i++)
- cpumask_set_cpu(i, tmp);
+ for (cpu = 0; cpu < nthreads; cpu++)
+ cpumask_set_cpu(cpu, tmp);
cpu_maps_update_begin();
@@ -173,6 +187,19 @@ static int pseries_add_processor(struct device_node *np)
/* Get a bitmap of unoccupied slots. */
cpumask_xor(candidate_mask, cpu_possible_mask, cpu_present_mask);
+
+ /*
+ * Remove free ids previously assigned on the other nodes. We can walk
+ * only the online nodes because once a node becomes online it is never
+ * switched back offline.
+ */
+ for_each_online_node(node) {
+ if (node == nid) /* Keep our node's recorded ids */
+ continue;
+ cpumask_andnot(candidate_mask, candidate_mask,
+ node_recorded_ids_map[node]);
+ }
+

e.g. change 'nid' to 'assigned_node' or similar, and I think this
becomes easier to follow.

Fair enough, will send a v4

Otherwise the patch looks fine to me now.