Re: [PATCH 2/2] sched/fair: Cleanup in migrate_degrades_locality() to improve readability

From: Swapnil Sapkal
Date: Wed Jun 21 2023 - 00:44:33 EST


Hello Peter,

On 6/19/2023 3:15 PM, Peter Zijlstra wrote:
> On Wed, Jun 14, 2023 at 10:22:24AM +0000, Swapnil Sapkal wrote:
>> migrate_degrades_locality() returns a tristate value indicating
>> whether the migration will improve locality, degrade locality, or
>> have no impact. Handle these return values with an enum to improve
>> readability.
>
> I can see how you ended up there, that tristate is weird, but perhaps
> don't make it more complicated than it should be?

> ---
>  kernel/sched/fair.c | 39 ++++++++++++++++++++-------------------
>  1 file changed, 20 insertions(+), 19 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 373ff5f55884..a8449f594348 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8446,42 +8446,42 @@ static int task_hot(struct task_struct *p, struct lb_env *env)
>  #ifdef CONFIG_NUMA_BALANCING
>  /*
>   * Returns 1, if task migration degrades locality
> - * Returns 0, if task migration improves locality i.e migration preferred.
> - * Returns -1, if task migration is not affected by locality.
> + * Returns 0, if task migration is not affected by locality.
> + * Returns -1, if task migration improves locality i.e migration preferred.
>   */
Because of the following hunk:

@@ -8492,14 +8492,14 @@ static int migrate_degrades_locality(struct task_struct *p, struct lb_env *env)
 		dst_weight = task_weight(p, dst_nid, dist);
 	}

-	return dst_weight < src_weight;
+	return src_weight - dst_weight;
 }

I suppose we should also change the comment to:

/*
 * Returns a positive value, if task migration degrades locality
 * Returns 0, if task migration is not affected by locality.
 * Returns a negative value, if task migration improves locality i.e migration preferred.
 */

Do I need to resend a v2 of this patchset with your changes folded in?
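
While double-checking the new return statement, one subtlety: both
weights are unsigned long, so src_weight - dst_weight wraps around as
unsigned arithmetic whenever dst_weight > src_weight, and it is the
conversion to the new long return type that recovers the negative
value. That conversion is implementation-defined in ISO C, but gcc and
clang both document it as the expected modulo/two's-complement result,
so this is fine for the kernel. A quick userspace demo of what I mean
(illustration only, not part of the patch):

#include <stdio.h>

int main(void)
{
        unsigned long src_weight = 80, dst_weight = 120;

        /* Wraps to ULONG_MAX - 39 in unsigned long arithmetic... */
        long delta = src_weight - dst_weight;

        /* ...but reads back as -40 once converted to long. */
        printf("delta = %ld\n", delta);

        return 0;
}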

> -static int migrate_degrades_locality(struct task_struct *p, struct lb_env *env)
> +static long migrate_degrades_locality(struct task_struct *p, struct lb_env *env)
>  {
>  	struct numa_group *numa_group = rcu_dereference(p->numa_group);
>  	unsigned long src_weight, dst_weight;
>  	int src_nid, dst_nid, dist;
>
>  	if (!static_branch_likely(&sched_numa_balancing))
> -		return -1;
> +		return 0;
>
>  	if (!p->numa_faults || !(env->sd->flags & SD_NUMA))
> -		return -1;
> +		return 0;
>
>  	src_nid = cpu_to_node(env->src_cpu);
>  	dst_nid = cpu_to_node(env->dst_cpu);
>
>  	if (src_nid == dst_nid)
> -		return -1;
> +		return 0;
>
>  	/* Migrating away from the preferred node is always bad. */
>  	if (src_nid == p->numa_preferred_nid) {
>  		if (env->src_rq->nr_running > env->src_rq->nr_preferred_running)
>  			return 1;
>  		else
> -			return -1;
> +			return 0;
>  	}
>
>  	/* Encourage migration to the preferred node. */
>  	if (dst_nid == p->numa_preferred_nid)
> -		return 0;
> +		return -1;
>
>  	/* Leaving a core idle is often worse than degrading locality. */
>  	if (env->idle == CPU_IDLE)
> -		return -1;
> +		return 0;
>
>  	dist = node_distance(src_nid, dst_nid);
>  	if (numa_group) {
> @@ -8492,14 +8492,14 @@ static int migrate_degrades_locality(struct task_struct *p, struct lb_env *env)
>  		dst_weight = task_weight(p, dst_nid, dist);
>  	}
>
> -	return dst_weight < src_weight;
> +	return src_weight - dst_weight;
>  }
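
For completeness, the call site in can_migrate_task() also has to
follow the new convention; right now it falls back with:

        tsk_cache_hot = migrate_degrades_locality(p, env);
        if (tsk_cache_hot == -1)
                tsk_cache_hot = task_hot(p, env);

With your change I would expect it to end up looking something like
the sketch below (my reading only, the variable names are mine, not
necessarily your exact diff):

        long degrades, hot;

        degrades = migrate_degrades_locality(p, env);
        if (!degrades)
                hot = task_hot(p, env);	/* locality-neutral: check cache hotness */
        else
                hot = degrades > 0;	/* only the sign matters here */

so that the magnitude of the weight difference is ignored and only its
sign is consumed at the call site.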
--
Thanks and regards,
Swapnil