Re: [PATCH v2] percpu-internal/pcpu_chunk: Re-layout pcpu_chunk structure to reduce false sharing

From: Dennis Zhou
Date: Fri Jun 09 2023 - 14:21:07 EST


Hi Yu,

On Wed, Jun 07, 2023 at 03:02:32PM +0000, Ma, Yu wrote:
> Thanks Liam and Dennis for the review. Here is the updated patch with the comment added:
>
> > When running the UnixBench/Execl throughput case, false sharing is observed
> > due to frequent reads of base_addr and writes to free_bytes and chunk_md.
> >
> > UnixBench/Execl represents a class of workloads where bash scripts are
> > spawned frequently to run short jobs. It issues the execl system call
> > frequently, and execl calls mm_init to initialize the mm_struct of the
> > process. mm_init calls __percpu_counter_init to initialize the percpu
> > counters, which calls pcpu_alloc; pcpu_alloc reads the base_addr of
> > pcpu_chunk for memory allocation. Inside pcpu_alloc, pcpu_alloc_area is
> > called to allocate memory from a specified chunk, and it updates
> > "free_bytes" and "chunk_md" to record the remaining free bytes and other
> > metadata for the chunk. Correspondingly, pcpu_free_area also updates
> > these two members when freeing memory.
> > The call trace from perf is as below:
> > + 57.15% 0.01% execl [kernel.kallsyms] [k] __percpu_counter_init
> > + 57.13% 0.91% execl [kernel.kallsyms] [k] pcpu_alloc
> > - 55.27% 54.51% execl [kernel.kallsyms] [k] osq_lock
> > - 53.54% 0x654278696e552f34
> > main
> > __execve
> > entry_SYSCALL_64_after_hwframe
> > do_syscall_64
> > __x64_sys_execve
> > do_execveat_common.isra.47
> > alloc_bprm
> > mm_init
> > __percpu_counter_init
> > pcpu_alloc
> > - __mutex_lock.isra.17
> >
> > In the current pcpu_chunk layout, 'base_addr' is in the same cache line as
> > 'free_bytes' and 'chunk_md', occupying the last 8 bytes of that line. This
> > patch moves 'bound_map' up, ahead of 'base_addr', so that 'base_addr'
> > starts on a new cacheline.
> >
> > With this change, on an Intel Sapphire Rapids 112C/224T platform, based on
> > v6.4-rc4, the Execl score with 160 parallel copies improves by 24%.
> >
> > Reviewed-by: Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx>
> > Signed-off-by: Yu Ma <yu.ma@xxxxxxxxx>
> > ---
> > mm/percpu-internal.h | 8 +++++++-
> > 1 file changed, 7 insertions(+), 1 deletion(-)
> >
> > diff --git a/mm/percpu-internal.h b/mm/percpu-internal.h
> > index f9847c131998..ecc7be1ec876 100644
> > --- a/mm/percpu-internal.h
> > +++ b/mm/percpu-internal.h
> > @@ -41,10 +41,16 @@ struct pcpu_chunk {
> > struct list_head list; /* linked to pcpu_slot lists */
> > int free_bytes; /* free bytes in the chunk */
> > struct pcpu_block_md chunk_md;
> > + unsigned long *bound_map; /* boundary map */
> > +
> > + /*
> > + * To reduce false sharing, the layout is optimized to make sure
> > + * base_addr is located on a different cacheline from free_bytes and
> > + * chunk_md.
> > + */
> > void *base_addr; /* base address of this chunk */
> >
> > unsigned long *alloc_map; /* allocation map */
> > - unsigned long *bound_map; /* boundary map */
> > struct pcpu_block_md *md_blocks; /* metadata blocks */
> >
> > void *data; /* chunk data */
> > --
> > 2.39.3
>

Thanks for adding the comment, but would you mind adding
____cacheline_aligned_in_smp? Unless that's something we're trying to
avoid, I think this is a good use case for it both on the pcpu_chunk and
specifically on base_addr as that's what we're accessing without a lock.
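
Roughly like the below (untested sketch, just to illustrate what I mean;
the comment wording is mine):

	struct pcpu_block_md	chunk_md;
	unsigned long		*bound_map;	/* boundary map */

	/*
	 * base_addr is read without pcpu_lock on the pcpu_alloc() path,
	 * while free_bytes and chunk_md above are written under the lock,
	 * so give it a cacheline of its own.
	 */
	void			*base_addr ____cacheline_aligned_in_smp;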

Thanks,
Dennis