Re: [PATCH v2 00/19] Fixes for sched/numa_balancing

From: Peter Zijlstra
Date: Mon Jul 23 2018 - 09:57:11 EST


On Wed, Jun 20, 2018 at 10:32:41PM +0530, Srikar Dronamraju wrote:
> Srikar Dronamraju (19):
> sched/numa: Remove redundant field.
> sched/numa: Evaluate move once per node
> sched/numa: Simplify load_too_imbalanced
> sched/numa: Set preferred_node based on best_cpu
> sched/numa: Use task faults only if numa_group is not yet setup
> sched/debug: Reverse the order of printing faults
> sched/numa: Skip nodes that are at hoplimit
> sched/numa: Remove unused task_capacity from numa_stats
> sched/numa: Modify migrate_swap to accept additional params
> sched/numa: Restrict migrating in parallel to the same node.
> sched/numa: Remove numa_has_capacity
> sched/numa: Use group_weights to identify if migration degrades locality
> sched/numa: Move task_placement closer to numa_migrate_preferred

I took the patches above, but left the ones below for next time.

> sched/numa: Stop multiple tasks from moving to the cpu at the same time
> mm/migrate: Use xchg instead of spinlock
> sched/numa: Updation of scan period need not be in lock
> sched/numa: Detect if node actively handling migration
> sched/numa: Pass destination cpu as a parameter to migrate_task_rq
> sched/numa: Reset scan rate whenever task moves across nodes