Re: [PATCH 3/8] sched/topology: add for_each_numa_cpu() macro

From: Yury Norov
Date: Mon Apr 10 2023 - 14:05:35 EST


On Sat, Mar 25, 2023 at 11:55:09AM -0700, Yury Norov wrote:
> for_each_cpu() is widely used in the kernel, and it's beneficial to
> create a NUMA-aware version of the macro.
>
> The recently added for_each_numa_hop_mask() works, but switching the existing
> codebase to it is not an easy process.
>
> The new for_each_numa_cpu() is designed to be similar to for_each_cpu().
> It allows converting existing code to a NUMA-aware version simply by adding
> a hop iterator variable and passing it to the new macro; for_each_numa_cpu()
> takes care of the rest.
>
> At the moment, we have two users of NUMA-aware enumerators. One is
> Mellanox's in-tree driver, and the other is Intel's in-review driver:
>
> https://lore.kernel.org/lkml/20230216145455.661709-1-pawel.chmielewski@xxxxxxxxx/
>
> Both real-life examples follow the same pattern:
>
> 	for_each_numa_hop_mask(cpus, prev, node) {
> 		for_each_cpu_andnot(cpu, cpus, prev) {
> 			if (cnt++ == max_num)
> 				goto out;
> 			do_something(cpu);
> 		}
> 		prev = cpus;
> 	}
>
> With the new macro, it would look like this:
>
> 	for_each_numa_cpu(cpu, hop, node, cpu_possible_mask) {
> 		if (cnt++ == max_num)
> 			break;
> 		do_something(cpu);
> 	}
>
> Straight conversion of the existing for_each_cpu() codebase to a NUMA-aware
> version with for_each_numa_hop_mask() is difficult because the latter doesn't
> take a user-provided cpu mask, and it eventually ends up as an open-coded
> double loop. With for_each_numa_cpu() it shouldn't be a brainteaser.
> Consider the NUMA-ignorant example:
>
> 	cpumask_t cpus = get_mask();
> 	int cnt = 0, cpu;
>
> 	for_each_cpu(cpu, &cpus) {
> 		if (cnt++ == max_num)
> 			break;
> 		do_something(cpu);
> 	}
>
> Converting it to the NUMA-aware version would be as simple as:
>
> 	cpumask_t cpus = get_mask();
> 	int node = get_node();
> 	int cnt = 0, hop, cpu;
>
> 	for_each_numa_cpu(cpu, hop, node, &cpus) {
> 		if (cnt++ == max_num)
> 			break;
> 		do_something(cpu);
> 	}
>
> The latter is only slightly more verbose, and it avoids open-coding that
> annoying double loop. Another advantage is that it exposes a 'hop' parameter
> with the clear meaning of NUMA distance, and doesn't force people unfamiliar
> with the enumerator internals to bother with the current/previous masks
> machinery.
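
To illustrate the 'hop' point: the hop value only grows as the iteration moves
to more distant NUMA levels, so it can also serve as a distance cutoff inside
the loop body. A rough sketch, not taken from the patch -- 'cpus', 'node',
'max_dist' and do_something() are placeholders, and rcu_read_lock() is taken
because the iterator requires it:

	rcu_read_lock();
	for_each_numa_cpu(cpu, hop, node, &cpus) {
		/* 'hop' is the NUMA distance of 'cpu' from 'node' */
		if (hop > max_dist)
			break;
		do_something(cpu);
	}
	rcu_read_unlock();
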
>
> Signed-off-by: Yury Norov <yury.norov@xxxxxxxxx>
> ---
> include/linux/topology.h | 20 ++++++++++++++++++++
> 1 file changed, 20 insertions(+)
>
> diff --git a/include/linux/topology.h b/include/linux/topology.h
> index 4a63154fa036..62a9dd8edd77 100644
> --- a/include/linux/topology.h
> +++ b/include/linux/topology.h
> @@ -286,4 +286,24 @@ sched_numa_hop_mask(unsigned int node, unsigned int hops)
> 	     !IS_ERR_OR_NULL(mask);					\
> 	     __hops++)
>
> +/**
> + * for_each_numa_cpu - iterate over cpus in increasing order, taking into
> + * account NUMA distances from a given node.
> + * @cpu: the (optionally unsigned) integer iterator
> + * @hop: the iterator variable, must be initialized to a desired minimal hop.
> + * @node: the NUMA node to start the search from.
> + * @mask: the cpumask to iterate over.
> + *
> + * Requires rcu_read_lock() to be held.

The comments below are incorrect (copy-paste error). I'll remove them in v2.

> + *
> + * Because it's implemented as double-loop, using 'break' inside the body of
> + * iterator may lead to undefined behaviour. Use 'goto' instead.
> + *
> + * Yields intersection of @mask and cpu_online_mask if @node == NUMA_NO_NODE.
> + */
> +#define for_each_numa_cpu(cpu, hop, node, mask)			\
> +	for ((cpu) = 0, (hop) = 0;					\
> +	     (cpu) = sched_numa_find_next_cpu((mask), (cpu), (node), &(hop)),\
> +	     (cpu) < nr_cpu_ids;					\
> +	     (cpu)++)
> +
> #endif /* _LINUX_TOPOLOGY_H */
> --
> 2.34.1
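
For readers coming to this patch in isolation: the macro relies on
sched_numa_find_next_cpu(), added earlier in this series. Below is a sketch of
the contract it is expected to satisfy, inferred from the call site above --
the exact prototype and types may differ from the real one:

	/*
	 * Return the next CPU >= @cpu in @cpus that sits at the NUMA distance
	 * currently recorded in *@hop from @node; when that distance level is
	 * exhausted, bump *@hop and continue at the next level.  Return
	 * nr_cpu_ids once no CPUs are left, which terminates the loop above.
	 */
	int sched_numa_find_next_cpu(const struct cpumask *cpus, int cpu,
				     int node, unsigned int *hop);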