Re: [PATCH v2] sched/task_group: Re-layout structure to reduce false sharing

From: Aaron Lu
Date: Mon Jun 26 2023 - 01:48:27 EST


On Wed, Jun 21, 2023 at 04:14:25PM +0800, Deng Pan wrote:
> When running the UnixBench Pipe-based Context Switching case, we observed
> heavy false sharing between accesses to 'load_avg' and to rt_se/rt_rq
> when CONFIG_RT_GROUP_SCHED is turned on.
>
> Pipe-based Context Switching is a typical sleep/wakeup scenario, in which
> load_avg is frequently loaded and stored while rt_se and rt_rq are
> frequently loaded. Unfortunately, they sit in the same cacheline.
>
> This change re-lays out the structure:
> 1. Move rt_se and rt_rq to a 2nd cacheline.
> 2. Keep the 'parent' field in the 2nd cacheline since it is also accessed
> very often when cgroups are nested; thanks to Tim Chen for providing the
> insight.
>
> Tested on a 2-socket Intel Icelake 80C/160T platform, based on v6.4-rc5.
>
> With this change, the Pipe-based Context Switching 160-parallel score
> improves by ~9.6%, perf record shows rt_se and rt_rq access cycles drop
> from ~14.5% to ~0.3%, and perf c2c shows the false sharing is resolved
> as expected:
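
The pattern is easy to reproduce outside the kernel, too. Below is a rough
userspace sketch (not the kernel code; the field names and the 64-byte
cacheline size are just assumptions for illustration): one thread keeps
doing read-modify-write on a counter standing in for load_avg while a
second thread only loads a neighbouring pointer standing in for
rt_se/rt_rq. Built as-is (PAD=1) both fields share a cacheline; built with
-DPAD=64 the reader's field moves to the next line, which is essentially
what your re-layout achieves:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

#define ITERS (1UL << 27)

#ifndef PAD			/* build with -DPAD=64 to separate the fields */
#define PAD 1
#endif

struct layout {
	atomic_long load_avg;		/* stored by the "wakeup" side */
	char pad[PAD];
	_Atomic(void *) rt_se;		/* only loaded by the other side */
} __attribute__((aligned(64)));

static struct layout tg;

static void *writer(void *arg)
{
	(void)arg;
	for (unsigned long i = 0; i < ITERS; i++)
		atomic_fetch_add_explicit(&tg.load_avg, 1, memory_order_relaxed);
	return NULL;
}

static void *reader(void *arg)
{
	unsigned long hits = 0;

	(void)arg;
	for (unsigned long i = 0; i < ITERS; i++)
		hits += atomic_load_explicit(&tg.rt_se, memory_order_relaxed) != NULL;
	return (void *)hits;
}

int main(void)
{
	pthread_t w, r;
	struct timespec t0, t1;

	atomic_store(&tg.rt_se, (void *)&tg);	/* give the reader something to load */

	clock_gettime(CLOCK_MONOTONIC, &t0);
	pthread_create(&w, NULL, writer, NULL);
	pthread_create(&r, NULL, reader, NULL);
	pthread_join(w, NULL);
	pthread_join(r, NULL);
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("pad=%d: %.2fs\n", PAD,
	       (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
	return 0;
}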

I also gave it a run on an Icelake and saw similar results when
CONFIG_RT_GROUP_SCHED is on.

For hackbench/pipe/thread, set_task_cpu() dropped from 1.67% to 0.51% of
cycles according to perf; for netperf/nr_client=nr_cpu/UDP_RR,
set_task_cpu() dropped from 5.06% to 1.08%.

The patch looks good to me, just a nit below.

> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index ec7b3e0a2b20..4fbd4b3a4bdd 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -389,6 +389,19 @@ struct task_group {
>  #endif
>  #endif
>
> +	struct rcu_head rcu;
> +	struct list_head list;
> +
> +	struct list_head siblings;
> +	struct list_head children;
> +
> +	/*
> +	 * To reduce false sharing, current layout is optimized to make
> +	 * sure load_avg is in a different cacheline from parent, rt_se
> +	 * and rt_rq.
> +	 */
> +	struct task_group *parent;
> +

I wonder if we can simply do:

diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index ec7b3e0a2b20..31b73e8d9568 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -385,7 +385,9 @@ struct task_group {
 	 * it in its own cacheline separated from the fields above which
 	 * will also be accessed at each tick.
 	 */
-	atomic_long_t load_avg ____cacheline_aligned;
+	struct {
+		atomic_long_t load_avg;
+	} ____cacheline_aligned_in_smp;
 #endif
 #endif

This way we can make sure there is no false sharing with load_avg no
matter how the layout of this structure changes in the future.

Your patch has the advantage of not adding any more padding and thus saves
some space; the example code above has the advantage that we don't need to
worry about future changes breaking the expected alignment, but it does
make the structure a little larger (704 -> 768 bytes).
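
FWIW the extra 64 bytes come from where the alignment attribute sits: with
it on the member only (current code), the compiler pads in front of
load_avg but the fields after it can still share its cacheline; with it on
the wrapper struct, sizeof() of that wrapper is rounded up to a whole
cacheline, so load_avg also gets padding behind it. A rough userspace
sketch showing both the benefit and the cost (the field mix and the
64-byte line size are made up; ____cacheline_aligned_in_smp boils down to
__aligned__(SMP_CACHE_BYTES) on SMP builds):

#include <stdio.h>
#include <stddef.h>

#define CACHELINE 64	/* assumed L1 line size */

/* alignment on the member itself: padding only in front of load_avg */
struct member_aligned {
	long before[6];
	long load_avg __attribute__((aligned(CACHELINE)));
	long after[6];
};

/* alignment on an anonymous wrapper: padding in front and behind */
struct wrapper_aligned {
	long before[6];
	struct {
		long load_avg;
	} __attribute__((aligned(CACHELINE)));
	long after[6];
};

int main(void)
{
	printf("member aligned : size %zu, load_avg at %zu, after[] at %zu\n",
	       sizeof(struct member_aligned),
	       offsetof(struct member_aligned, load_avg),
	       offsetof(struct member_aligned, after));
	printf("wrapper aligned: size %zu, load_avg at %zu, after[] at %zu\n",
	       sizeof(struct wrapper_aligned),
	       offsetof(struct wrapper_aligned, load_avg),
	       offsetof(struct wrapper_aligned, after));
	return 0;
}

On a typical x86-64 build the member-aligned variant is 128 bytes with
after[] at offset 72, i.e. still on load_avg's line, while the wrapper
variant is 192 bytes with after[] pushed out to offset 128, one extra
cacheline, which mirrors the 704 -> 768 above.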

Thanks,
Aaron

>  #ifdef CONFIG_RT_GROUP_SCHED
>  	struct sched_rt_entity **rt_se;
>  	struct rt_rq **rt_rq;
> @@ -396,13 +409,6 @@ struct task_group {
>  	struct rt_bandwidth rt_bandwidth;
>  #endif
>
> -	struct rcu_head rcu;
> -	struct list_head list;
> -
> -	struct task_group *parent;
> -	struct list_head siblings;
> -	struct list_head children;
> -
>  #ifdef CONFIG_SCHED_AUTOGROUP
>  	struct autogroup *autogroup;
>  #endif
> --
> 2.39.3
>