Re: [PATCH 2/2] net/mlx5e: Leverage sched_numa_hop_mask()

From: Tariq Toukan
Date: Wed Aug 10 2022 - 08:58:32 EST

On 8/10/2022 1:51 PM, Valentin Schneider wrote:
> Signed-off-by: Valentin Schneider <vschneid@xxxxxxxxxx>
> ---
>  drivers/net/ethernet/mellanox/mlx5/core/eq.c | 16 ++++++++++++++--
>  1 file changed, 14 insertions(+), 2 deletions(-)


Missing description.

I had a very detailed description with performance numbers and an affinity hints example with before/after tables. I don't want them to get lost.


> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> index 229728c80233..2eb4ffd96a95 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
> @@ -809,9 +809,12 @@ static void comp_irqs_release(struct mlx5_core_dev *dev)
>  static int comp_irqs_request(struct mlx5_core_dev *dev)
>  {
>  	struct mlx5_eq_table *table = dev->priv.eq_table;
> +	const struct cpumask *mask;
>  	int ncomp_eqs = table->num_comp_eqs;
> +	int hops = 0;
>  	u16 *cpus;
>  	int ret;
> +	int cpu;
>  	int i;
> 
>  	ncomp_eqs = table->num_comp_eqs;
> @@ -830,8 +833,18 @@ static int comp_irqs_request(struct mlx5_core_dev *dev)
>  		ret = -ENOMEM;
>  		goto free_irqs;
>  	}
> -	for (i = 0; i < ncomp_eqs; i++)
> -		cpus[i] = cpumask_local_spread(i, dev->priv.numa_node);
> +
> +	i = 0;
> +	rcu_read_lock();
> +	for_each_numa_hop_mask(dev->priv.numa_node, hops, mask) {

We don't really use this 'hops' iterator: we always pass 0 (not a valuable input...), and we do not care about its final value. It would probably be best to hide it from the user inside the macro.
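Something like the below rough sketch, for example (for_each_numa_hop_mask_nohops is just a placeholder name; it assumes the sched_numa_hop_mask() helper from patch 1/2, which returns an ERR_PTR once the hops are exhausted):

/* Sketch only: hide the hop counter inside the macro. Assumes
 * sched_numa_hop_mask(node, hops) from patch 1/2 returns
 * ERR_PTR(-EINVAL) past the last NUMA level. */
#define for_each_numa_hop_mask_nohops(node, mask)		\
	for (unsigned int __hops = 0;				\
	     ((mask) = sched_numa_hop_mask(node, __hops)),	\
	     !IS_ERR(mask);					\
	     __hops++)

The caller would then pass only the node and the mask cursor.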

> +		for_each_cpu(cpu, mask) {
> +			cpus[i] = cpu;
> +			if (++i == ncomp_eqs)
> +				goto spread_done;
> +		}
> +	}
> +spread_done:
> +	rcu_read_unlock();
>  	ret = mlx5_irqs_request_vectors(dev, cpus, ncomp_eqs, table->comp_irqs);
>  	kfree(cpus);
>  	if (ret < 0)

This logic is generic; other drivers would want to use it as well. It should be introduced as a service/API function, if not by the sched topology code then at least by the networking subsystem.
Jakub, WDYT?
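
For illustration only, such a helper could look roughly like the below (netif_numa_spread_cpus is a made-up name and its placement is TBD; it simply factors out the loop from this patch):

/* Sketch only, name/placement hypothetical: fill @cpus with up to
 * @ncpus CPU ids, starting from the NUMA hops closest to @node, using
 * the for_each_numa_hop_mask() iterator from patch 1/2. Returns the
 * number of entries written. */
static int netif_numa_spread_cpus(u16 *cpus, int ncpus, int node)
{
	const struct cpumask *mask;
	int hops = 0;
	int cpu, i = 0;

	rcu_read_lock();
	for_each_numa_hop_mask(node, hops, mask) {
		for_each_cpu(cpu, mask) {
			cpus[i] = cpu;
			if (++i == ncpus)
				goto done;
		}
	}
done:
	rcu_read_unlock();
	return i;
}

The mlx5 call site would then collapse to a single call:

	netif_numa_spread_cpus(cpus, ncomp_eqs, dev->priv.numa_node);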