Re: [PATCH 1/8] x86/mm/cpa: Use flush_tlb_all()

From: Thomas Gleixner
Date: Wed Sep 19 2018 - 06:08:38 EST


On Wed, 19 Sep 2018, Peter Zijlstra wrote:
> On Wed, Sep 19, 2018 at 10:50:17AM +0200, Peter Zijlstra wrote:
> > Instead of open-coding it..
> >
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@xxxxxxxxxxxxx>
> > ---
> > arch/x86/mm/pageattr.c | 12 +-----------
> > 1 file changed, 1 insertion(+), 11 deletions(-)
> >
> > --- a/arch/x86/mm/pageattr.c
> > +++ b/arch/x86/mm/pageattr.c
> > @@ -285,16 +285,6 @@ static void cpa_flush_all(unsigned long
> > on_each_cpu(__cpa_flush_all, (void *) cache, 1);
> > }
> >
> > -static void __cpa_flush_range(void *arg)
> > -{
> > - /*
> > - * We could optimize that further and do individual per page
> > - * tlb invalidates for a low number of pages. Caveat: we must
> > - * flush the high aliases on 64bit as well.
> > - */
> > - __flush_tlb_all();
> > -}
>
> Hmm,.. so in patch #4 I do switch to flush_tlb_kernel_range(). What are
> those high aliases that comment talks about?

We have two mappings for the kernel: the 'real' one and the direct mapping
alias. For most of these operations we have to make sure that the page table
entries are identical in both maps.

The comments in that code probably need some care.

Thanks,

tglx