[bug report] sched/fair: Prefer prev cpu in asymmetric wakeup path

From: Dan Carpenter
Date: Fri Nov 13 2020 - 03:47:19 EST


Hello Vincent Guittot,

The patch b4c9c9f15649: "sched/fair: Prefer prev cpu in asymmetric
wakeup path" from Oct 29, 2020, leads to the following static checker
warning:

kernel/sched/fair.c:6249 select_idle_sibling()
error: uninitialized symbol 'task_util'.

kernel/sched/fair.c
6233 static int select_idle_sibling(struct task_struct *p, int prev, int target)
6234 {
6235 struct sched_domain *sd;
6236 unsigned long task_util;
6237 int i, recent_used_cpu;
6238
6239 /*
6240 * On asymmetric system, update task utilization because we will check
6241 * that the task fits with cpu's capacity.
6242 */

The original comment was a bit clearer... Perhaps: "On asymmetric
system[s], [record the] task utilization because we will check that the
task [can be done within] the cpu's capacity."

6243 if (static_branch_unlikely(&sched_asym_cpucapacity)) {
6244 sync_entity_load_avg(&p->se);
6245 task_util = uclamp_task_util(p);
6246 }

"task_util" is not initialized on the else path.

6247
6248 if ((available_idle_cpu(target) || sched_idle_cpu(target)) &&
6249 asym_fits_capacity(task_util, target))
^^^^^^^^^
Uninitialized variable warning.

6250 return target;
6251
6252 /*
6253 * If the previous CPU is cache affine and idle, don't be stupid:
6254 */
6255 if (prev != target && cpus_share_cache(prev, target) &&
6256 (available_idle_cpu(prev) || sched_idle_cpu(prev)) &&
6257 asym_fits_capacity(task_util, prev))
6258 return prev;
6259
6260 /*
6261 * Allow a per-cpu kthread to stack with the wakee if the

regards,
dan carpenter