Re: [PATCH net-next V4 1/3] sched/topology: Add NUMA-based CPUs spread API

From: Tariq Toukan
Date: Tue Aug 09 2022 - 10:04:18 EST

On 8/9/2022 3:52 PM, Valentin Schneider wrote:
> On 09/08/22 13:18, Tariq Toukan wrote:
>> On 8/9/2022 1:02 PM, Valentin Schneider wrote:
>>>
>>> Are there cases where we can't figure this out in advance? From what I
>>> grok out of the two callsites you patched, all vectors will be used
>>> unless some error happens, so compressing the CPUs in a single cpumask
>>> seemed sufficient.
>>>
>>
>> All vectors will be initialized to support the maximum number of traffic
>> rings. However, the actual number of traffic rings can be controlled and
>> set to a lower number N_actual < N. In this case, we'll be using only
>> N_actual instances, and we want them to be the first/closest.
>
> Ok, that makes sense, thank you.
>
> In that case I wonder if we'd want a public-facing iterator for
> sched_domains_numa_masks[%i][node], rather than copying a portion of
> it. Something like the below (naming and implementation haven't been
> thought about too much).
>
> const struct cpumask *sched_numa_level_mask(int node, int level)
> {
> 	struct cpumask ***masks = rcu_dereference(sched_domains_numa_masks);
>
> 	if (node >= nr_node_ids || level >= sched_domains_numa_levels)
> 		return NULL;
>
> 	if (!masks)
> 		return NULL;
>
> 	return masks[level][node];
> }
> EXPORT_SYMBOL_GPL(sched_numa_level_mask);

The above can be kept static, exposing only the foo() function below,
similar to my sched_cpus_set_spread().

LGTM.
How do you suggest we proceed?
Do you want to formalize it, or should I take it from here?

> #define for_each_numa_level_mask(node, lvl, mask)		\
> 	for (mask = sched_numa_level_mask(node, lvl); mask;	\
> 	     mask = sched_numa_level_mask(node, ++lvl))
>
> void foo(int node, int cpus[], int ncpus)
> {
> 	const struct cpumask *mask;
> 	int lvl = 0;
> 	int i = 0;
> 	int cpu;
>
> 	rcu_read_lock();
> 	for_each_numa_level_mask(node, lvl, mask) {
> 		for_each_cpu(cpu, mask) {
> 			cpus[i] = cpu;
> 			if (++i == ncpus)
> 				goto done;
> 		}
> 	}
> done:
> 	rcu_read_unlock();
> }