Re: [Patch v3 2/6] sched/topology: Record number of cores in sched group

From: Peter Zijlstra
Date: Tue Jul 11 2023 - 07:32:46 EST


On Mon, Jul 10, 2023 at 03:40:34PM -0700, Tim Chen wrote:
> On Fri, 2023-07-07 at 15:57 -0700, Tim Chen wrote:
> > From: Tim C Chen <tim.c.chen@xxxxxxxxxxxxxxx>
> >
> > When balancing sibling domains that have different number of cores,
> > tasks in respective sibling domain should be proportional to the number
> > of cores in each domain. In preparation of implementing such a policy,
> > record the number of tasks in a scheduling group.
>
> Caught a typo. Should be "the number of cores" instead of
> "the number of tasks" in a scheduling group.
>
> Peter, should I send you another patch with the corrected commit log?

I'll fix it up; I already had to fix the patch because the robot
found a compile failure for SCHED_SMT=n builds.



> > @@ -1275,14 +1275,22 @@ build_sched_groups(struct sched_domain *sd, int cpu)
> > static void init_sched_groups_capacity(int cpu, struct sched_domain *sd)
> > {
> > struct sched_group *sg = sd->groups;
> > + struct cpumask *mask = sched_domains_tmpmask2;
> >
> > WARN_ON(!sg);
> >
> > do {
> > - int cpu, max_cpu = -1;
> > + int cpu, cores = 0, max_cpu = -1;
> >
> > sg->group_weight = cpumask_weight(sched_group_span(sg));
> >
> > + cpumask_copy(mask, sched_group_span(sg));
> > + for_each_cpu(cpu, mask) {
> > + cores++;
#ifdef CONFIG_SCHED_SMT
> > + cpumask_andnot(mask, mask, cpu_smt_mask(cpu));
#else
__cpumask_clear_cpu(cpu, mask);
#endif

or something along those lines -- it should be in queue.git/sched/core
already.

> > + }
> > + sg->cores = cores;
> > +
> > if (!(sd->flags & SD_ASYM_PACKING))
> > goto next;
> >
>