[PATCH v2 0/3] sched/fair: Reduce contention on tg's load_avg

From: Waiman Long
Date: Wed Dec 02 2015 - 13:42:27 EST


v1->v2:
- Allocate task_group objects from a dedicated kmem_cache so that they
  always start on a cacheline boundary, even when slab debugging is
  turned on (a sketch of this follows the list).
- Scrap the original patch 3 and replace it with one that disables the
  load_avg update for root_task_group.
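
For illustration, the kind of change the first item implies (a rough
sketch only; the helper names task_group_cache_init() and
alloc_task_group() are made up here, not necessarily what the patch
uses):

#include <linux/init.h>
#include <linux/slab.h>

/*
 * Sketch: allocate struct task_group from its own slab cache.
 * KMEM_CACHE() picks up the alignment declared on the struct, so the
 * objects stay cacheline aligned even when slab debug options (red
 * zones, poisoning) would otherwise shift kmalloc()'d objects off that
 * boundary.
 */
static struct kmem_cache *task_group_cache __read_mostly;

void __init task_group_cache_init(void)
{
	task_group_cache = KMEM_CACHE(task_group, 0);
}

static struct task_group *alloc_task_group(void)
{
	return kmem_cache_zalloc(task_group_cache, GFP_KERNEL);
}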

This patch series aims to reduce contention on task_group's load_avg
field in order to improve system performance. It also avoids a
redundant idle_cpu() call in update_sg_lb_stats() (a sketch of that
guard appears after the diffstat below).
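
The load_avg side of this is patch 3: the root task group's load_avg is
not consumed when recomputing group shares, so the cross-CPU atomic
update can be skipped for it entirely. A hedged sketch of that shape,
modelled on the existing update_tg_load_avg() logic rather than the
exact patch:

/*
 * Sketch only: bail out early for the root task group, whose load_avg
 * is not used, before touching the shared atomic counter.
 */
static inline void update_tg_load_avg(struct cfs_rq *cfs_rq, int force)
{
	long delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;

	if (cfs_rq->tg == &root_task_group)
		return;

	if (force || abs(delta) > cfs_rq->tg_load_avg_contrib / 64) {
		atomic_long_add(delta, &cfs_rq->tg->load_avg);
		cfs_rq->tg_load_avg_contrib = cfs_rq->avg.load_avg;
	}
}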

Waiman Long (3):
  sched/fair: Avoid redundant idle_cpu() call in update_sg_lb_stats()
  sched/fair: Move hot load_avg into its own cacheline
  sched/fair: Disable tg load_avg update for root_task_group

 kernel/sched/core.c  | 36 ++++++++++++++++++++++++++++++++++--
 kernel/sched/fair.c  | 16 +++++++++++++---
 kernel/sched/sched.h |  7 ++++++-
 3 files changed, 53 insertions(+), 6 deletions(-)
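
As for the idle_cpu() change mentioned above (patch 1), the guard boils
down to the observation that a CPU whose runqueue has nr_running != 0
cannot be idle, so the more expensive idle_cpu() test only needs to run
when nr_running is zero. A hedged sketch of that idea
(count_group_idle_cpus() is a made-up helper for illustration; the real
change sits inside update_sg_lb_stats()):

/*
 * Sketch only: count idle CPUs in a sched_group, calling idle_cpu()
 * only when nr_running does not already rule the CPU out.
 */
static unsigned int count_group_idle_cpus(struct sched_group *group,
					  struct lb_env *env)
{
	unsigned int idle_cpus = 0;
	int i;

	for_each_cpu_and(i, sched_group_cpus(group), env->cpus) {
		unsigned int nr_running = cpu_rq(i)->nr_running;

		if (!nr_running && idle_cpu(i))
			idle_cpus++;
	}

	return idle_cpus;
}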
