[PATCH v3 2/7] lib/group_cpus: optimize inner loop in grp_spread_init_one()

From: Yury Norov
Date: Mon Dec 11 2023 - 23:21:22 EST


The inner loop scans the sibling mask from the beginning every time the
outer loop moves on to the next CPU and its sibling mask. This is
Schlemiel the Painter's style of coding: we know for sure that nmsk is
clear up to and including the current CPU, so the scan can simply
continue from the next CPU.

Also, we can do it more neatly by leveraging the dedicated for_each()
iterator and by simplifying the logic of clearing a bit in nmsk.
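
For reference, a minimal sketch of the semantics this relies on, assuming
for_each_cpu_and_from() behaves like for_each_cpu_and() but starts the scan
at the current value of the iterator rather than at CPU 0 (the actual macro
is introduced elsewhere in this series and may be implemented differently):

	/*
	 * Sketch only: visit every cpu >= the initial value of @cpu that is
	 * set in both @mask1 and @mask2, using cpumask_next_and(), which
	 * returns the first CPU after @n set in both masks.
	 */
	#define for_each_cpu_and_from(cpu, mask1, mask2)			\
		for ((cpu) = cpumask_next_and((cpu) - 1, (mask1), (mask2));	\
		     (cpu) < nr_cpu_ids;					\
		     (cpu) = cpumask_next_and((cpu), (mask1), (mask2)))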

Signed-off-by: Yury Norov <yury.norov@xxxxxxxxx>
---
lib/group_cpus.c | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/lib/group_cpus.c b/lib/group_cpus.c
index ee272c4cefcc..10dead3ab0e0 100644
--- a/lib/group_cpus.c
+++ b/lib/group_cpus.c
@@ -30,14 +30,13 @@ static void grp_spread_init_one(struct cpumask *irqmsk, struct cpumask *nmsk,

 		/* If the cpu has siblings, use them first */
 		siblmsk = topology_sibling_cpumask(cpu);
-		for (sibl = -1; cpus_per_grp > 0; ) {
-			sibl = cpumask_next(sibl, siblmsk);
-			if (sibl >= nr_cpu_ids)
-				break;
-			if (!cpumask_test_and_clear_cpu(sibl, nmsk))
-				continue;
+		sibl = cpu + 1;
+
+		for_each_cpu_and_from(sibl, siblmsk, nmsk) {
+			if (cpus_per_grp-- == 0)
+				return;
+			cpumask_clear_cpu(sibl, nmsk);
 			cpumask_set_cpu(sibl, irqmsk);
-			cpus_per_grp--;
 		}
 	}
 }
--
2.40.1