Re: [PATCH] x86/mm/tlb: Remove flush_tlb_info from the stack

From: Andy Lutomirski
Date: Tue Apr 23 2019 - 13:24:06 EST


On Tue, Apr 23, 2019 at 9:56 AM Nadav Amit <namit@xxxxxxxxxx> wrote:
>
> > On Apr 23, 2019, at 9:50 AM, Andy Lutomirski <luto@xxxxxxxxxx> wrote:
> >
> > On Tue, Apr 23, 2019 at 12:12 AM Nadav Amit <namit@xxxxxxxxxx> wrote:
> >> Remove flush_tlb_info variables from the stack. This allows aligning
> >> flush_tlb_info to a cache line and avoids potentially unnecessary
> >> cache-line movements. It also allows a fixed virtual-to-physical
> >> translation of the variables, which reduces TLB misses.
> >>
> >> Use a per-CPU struct for flush_tlb_mm_range() and
> >> flush_tlb_kernel_range(). Add debug assertions to ensure there are
> >> no nested TLB flushes that might overwrite the per-CPU data. For
> >> arch_tlbbatch_flush(), use a const struct.
> >>
> >> Results when running a microbenchmark that performs 10^6 MADV_DONTNEED
> >> operations and touches a page, in which 3 additional threads run a
> >> busy-wait loop (5 runs):
> >
> > Can you add a memset(..., 0, sizeof(struct flush_tlb_info)) everywhere
> > you grab it? Or, even better, perhaps do something like:
> >
> > static inline struct flush_tlb_info *get_flush_tlb_info(void)
> > {
> >         /*
> >          * Check reentrancy; make sure that we use smp_processor_id() or
> >          * otherwise assert that we're bound to a single CPU.
> >          */
> >         struct flush_tlb_info *ptr = this_cpu_ptr(...);
> >
> >         memset(ptr, 0, sizeof(*ptr));
> >         return ptr;
> > }
> >
> > static inline void put_flush_tlb_info(void)
> > {
> >         /* Finish checking reentrancy. */
> > }
>
> I'll check if the compiler is smart enough to avoid redundant assignments,
> and if it is not, I'll just give all the struct arguments to
> get_flush_tlb_info() instead of memset() if you don't mind.

Sounds good.
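
For context, here is a rough sketch of how those helpers could look,
assuming a per-CPU flush_tlb_info plus a CONFIG_DEBUG_VM-only nesting
counter for the reentrancy check. The field names and argument list are
illustrative only, not necessarily what the final patch should use:

static DEFINE_PER_CPU_SHARED_ALIGNED(struct flush_tlb_info, flush_tlb_info);
#ifdef CONFIG_DEBUG_VM
static DEFINE_PER_CPU(unsigned int, flush_tlb_info_idx);
#endif

static inline struct flush_tlb_info *get_flush_tlb_info(struct mm_struct *mm,
			unsigned long start, unsigned long end,
			unsigned int stride_shift, bool freed_tables,
			u64 new_tlb_gen)
{
	struct flush_tlb_info *info = this_cpu_ptr(&flush_tlb_info);

#ifdef CONFIG_DEBUG_VM
	/*
	 * Catch nested flushes: a second get_flush_tlb_info() on this CPU
	 * before put_flush_tlb_info() would overwrite the per-CPU data.
	 */
	BUG_ON(this_cpu_inc_return(flush_tlb_info_idx) != 1);
#endif

	/* Assign every field explicitly instead of memset()ing the struct. */
	info->mm = mm;
	info->start = start;
	info->end = end;
	info->stride_shift = stride_shift;
	info->freed_tables = freed_tables;
	info->new_tlb_gen = new_tlb_gen;

	return info;
}

static inline void put_flush_tlb_info(void)
{
#ifdef CONFIG_DEBUG_VM
	/* Complete the reentrancy check; the per-CPU slot may be reused. */
	barrier();
	this_cpu_dec(flush_tlb_info_idx);
#endif
}

Callers would then bracket each flush with these, e.g. flush_tlb_mm_range()
calling get_flush_tlb_info(mm, start, end, ...), issuing the local and
remote flushes, and calling put_flush_tlb_info() before re-enabling
preemption.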

>
> I also want to try parallelizing the remote and local invocations,
> which really annoys me every time I look at the code.

Yes please!