Re: [Patch v3 2/6] sched/topology: Record number of cores in sched group

From: Tim Chen
Date: Tue Jul 11 2023 - 12:34:45 EST


On Tue, 2023-07-11 at 13:31 +0200, Peter Zijlstra wrote:
> On Mon, Jul 10, 2023 at 03:40:34PM -0700, Tim Chen wrote:
> > On Fri, 2023-07-07 at 15:57 -0700, Tim Chen wrote:
> > > From: Tim C Chen <tim.c.chen@xxxxxxxxxxxxxxx>
> > >
> > > When balancing sibling domains that have different number of cores,
> > > tasks in respective sibling domain should be proportional to the number
> > > of cores in each domain. In preparation of implementing such a policy,
> > > record the number of tasks in a scheduling group.
> >
> > Caught a typo. Should be "the number of cores" instead of
> > "the number of tasks" in a scheduling group.
> >
> > Peter, should I send you another patch with the corrected commit log?
>
> I'll fix it up, already had to fix the patch due to the robot
> finding a compile fail for SCHED_SMT=n builds.
>
>
>
> > > @@ -1275,14 +1275,22 @@ build_sched_groups(struct sched_domain *sd, int cpu)
> > >  static void init_sched_groups_capacity(int cpu, struct sched_domain *sd)
> > >  {
> > >  	struct sched_group *sg = sd->groups;
> > > +	struct cpumask *mask = sched_domains_tmpmask2;
> > >
> > >  	WARN_ON(!sg);
> > >
> > >  	do {
> > > -		int cpu, max_cpu = -1;
> > > +		int cpu, cores = 0, max_cpu = -1;
> > >
> > >  		sg->group_weight = cpumask_weight(sched_group_span(sg));
> > >
> > > +		cpumask_copy(mask, sched_group_span(sg));
> > > +		for_each_cpu(cpu, mask) {
> > > +			cores++;
> #ifdef CONFIG_SCHED_SMT
> > > +			cpumask_andnot(mask, mask, cpu_smt_mask(cpu));
> #else
> 			__cpumask_clear_cpu(cpu, mask);

Thanks for fixing up the non-SCHED_SMT case.

I think "__cpumask_clear_cpu(cpu, mask);" can be removed.

Since we have already considered the CPU in the iterator, clearing it
is unnecessay. So effectively

	for_each_cpu(cpu, mask) {
		cores++;
	}

should be good enough for the non-SCHED_SMT case.
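
In other words, the loop would just be the quoted hunk with the #else
branch dropped, something like:

	cpumask_copy(mask, sched_group_span(sg));
	for_each_cpu(cpu, mask) {
		cores++;
#ifdef CONFIG_SCHED_SMT
		/* clear @cpu's SMT siblings so each core is counted once */
		cpumask_andnot(mask, mask, cpu_smt_mask(cpu));
#endif
	}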

Or replace it with the patch below so we don't have an #ifdef in the
middle of the code body. Either way is fine.
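
Just to illustrate the idea (this is only a sketch, not the patch
after the separator below, and the helper name is made up), one way to
keep the #ifdef out of the loop body is a tiny wrapper that degenerates
to a single-CPU mask when SMT is off:

/* hypothetical helper, for illustration only */
#ifdef CONFIG_SCHED_SMT
static inline const struct cpumask *sched_smt_span(int cpu)
{
	return cpu_smt_mask(cpu);
}
#else
static inline const struct cpumask *sched_smt_span(int cpu)
{
	return cpumask_of(cpu);	/* just @cpu itself when SMT is off */
}
#endif

so the loop itself stays free of preprocessor conditionals:

	cpumask_copy(mask, sched_group_span(sg));
	for_each_cpu(cpu, mask) {
		cores++;
		/* drop @cpu's SMT siblings so the next iteration lands on a new core */
		cpumask_andnot(mask, mask, sched_smt_span(cpu));
	}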

---