Re: [PATCH v3 1/3] sched/fair: Add asymmetric CPU capacity wakeup scan

From: Dietmar Eggemann
Date: Wed Jan 29 2020 - 06:05:00 EST


On 26/01/2020 21:09, Valentin Schneider wrote:

[...]

> +static int select_idle_capacity(struct task_struct *p, int target)
> +{
> +	unsigned long best_cap = 0;
> +	struct sched_domain *sd;
> +	struct cpumask *cpus;
> +	int best_cpu = -1;
> +	struct rq *rq;
> +	int cpu;
> +
> +	if (!static_branch_unlikely(&sched_asym_cpucapacity))
> +		return -1;
> +
> +	sd = rcu_dereference(per_cpu(sd_asym_cpucapacity, target));
> +	if (!sd)
> +		return -1;
> +
> +	sync_entity_load_avg(&p->se);
> +
> +	cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
> +	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
> +
> +	for_each_cpu_wrap(cpu, cpus, target) {
> +		rq = cpu_rq(cpu);
> +
> +		if (!available_idle_cpu(cpu))
> +			continue;
> +		if (task_fits_capacity(p, rq->cpu_capacity))
> +			return cpu;
> +
> +		/*
> +		 * It would be silly to keep looping when we've found a CPU
> +		 * of highest available capacity. Just check that it's not been
> +		 * too pressured lately.
> +		 */
> +		if (rq->cpu_capacity_orig == READ_ONCE(rq->rd->max_cpu_capacity) &&

There is a similar check in check_misfit_status(). Could this become a
common helper function? (Rough sketch below, after the quoted hunk.)

> +		    !check_cpu_capacity(rq, sd))
> +			return cpu;
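
For illustration, such a helper could be as small as the sketch below. The
name rq_has_max_capacity() is made up here, and it assumes per-CPU
capacities never exceed rd->max_cpu_capacity (note that check_misfit_status()
currently tests with '<' and without the READ_ONCE()):

/*
 * Hypothetical helper, name invented for this sketch: is @rq's CPU one
 * of the highest-capacity CPUs in the root domain?
 */
static inline bool rq_has_max_capacity(struct rq *rq)
{
	return rq->cpu_capacity_orig == READ_ONCE(rq->rd->max_cpu_capacity);
}

select_idle_capacity() would then test rq_has_max_capacity(rq) &&
!check_cpu_capacity(rq, sd), and check_misfit_status() the complement.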

I wonder how this special treatment of the big CPU behaves on a (LITTLE,
medium, big) system like the Pixel 4 (Snapdragon 855):

flame:/ $ cat /sys/devices/system/cpu/cpu*/cpu_capacity
261
261
261
261
871
871
871
1024
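
To make that concrete, here is a standalone user-space sketch, assuming the
fits_capacity() definition from kernel/sched/fair.c with its 1280/1024
(~20%) margin. A task with util ~700 fits neither the LITTLEs nor the
mediums, so on this system the early bail-out can only ever fire on the
single 1024 CPU; the three 871 CPUs are scanned without it:

#include <stdio.h>

/* Mirrors fits_capacity() from kernel/sched/fair.c: ~20% headroom. */
static int fits_capacity(unsigned long util, unsigned long max)
{
	return util * 1280 < max * 1024;
}

int main(void)
{
	unsigned long caps[] = { 261, 871, 1024 };	/* Pixel 4 gears */
	unsigned long util = 700;			/* example task util */
	int i;

	for (i = 0; i < 3; i++)
		printf("capacity %4lu: fits=%d\n", caps[i],
		       fits_capacity(util, caps[i]));
	return 0;					/* prints 0, 0, 1 */
}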

And what about legacy systems, where sd->imbalance_pct gives a 25% margin
instead of 17%?
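
The same kind of sketch for the imbalance_pct question, mirroring
check_cpu_capacity(): with imbalance_pct = 117 a big CPU whose capacity
has dropped to ~850 out of 1024 is already considered pressured and
skipped, whereas with the legacy 125 it would still be returned:

#include <stdio.h>

/* Mirrors check_cpu_capacity() from kernel/sched/fair.c. */
static int cpu_pressured(unsigned long cap, unsigned long cap_orig,
			 unsigned int imbalance_pct)
{
	return cap * imbalance_pct < cap_orig * 100;
}

int main(void)
{
	unsigned long cap_orig = 1024;	/* big CPU */
	unsigned long cap = 850;	/* capacity left after RT/IRQ pressure */

	printf("pct=117: pressured=%d\n", cpu_pressured(cap, cap_orig, 117));
	printf("pct=125: pressured=%d\n", cpu_pressured(cap, cap_orig, 125));
	return 0;			/* prints 1, then 0 */
}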