Re: [PATCH v2 27/37] arm64: mte: Switch GCR_EL1 in kernel entry and exit

From: Catalin Marinas
Date: Thu Sep 17 2020 - 13:05:03 EST


On Thu, Sep 17, 2020 at 05:52:21PM +0100, Catalin Marinas wrote:
> On Tue, Sep 15, 2020 at 11:16:09PM +0200, Andrey Konovalov wrote:
> > diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> > index ff34461524d4..79a6848840bd 100644
> > --- a/arch/arm64/kernel/entry.S
> > +++ b/arch/arm64/kernel/entry.S
> > @@ -175,6 +175,28 @@ alternative_else_nop_endif
> > #endif
> > .endm
> >
> > + .macro mte_restore_gcr, el, tsk, tmp, tmp2
> > +#ifdef CONFIG_ARM64_MTE
> > +alternative_if_not ARM64_MTE
> > + b 1f
> > +alternative_else_nop_endif
> > + .if \el == 0
> > + ldr \tmp, [\tsk, #THREAD_GCR_EL1_USER]
> > + .else
> > + ldr_l \tmp, gcr_kernel_excl
> > + .endif
> > + /*
> > + * Calculate and set the exclude mask preserving
> > + * the RRND (bit[16]) setting.
> > + */
> > + mrs_s \tmp2, SYS_GCR_EL1
> > + bfi \tmp2, \tmp, #0, #16
> > + msr_s SYS_GCR_EL1, \tmp2
> > + isb
> > +1:
> > +#endif
> > + .endm
> > +
> > .macro kernel_entry, el, regsize = 64
> > .if \regsize == 32
> > mov w0, w0 // zero upper 32 bits of x0
> > @@ -214,6 +236,8 @@ alternative_else_nop_endif
> >
> > ptrauth_keys_install_kernel tsk, x20, x22, x23
> >
> > + mte_restore_gcr 1, tsk, x22, x23
> > +
> > scs_load tsk, x20
> > .else
> > add x21, sp, #S_FRAME_SIZE
> > @@ -332,6 +356,8 @@ alternative_else_nop_endif
> > /* No kernel C function calls after this as user keys are set. */
> > ptrauth_keys_install_user tsk, x0, x1, x2
> >
> > + mte_restore_gcr 0, tsk, x0, x1
>
> A couple of nitpicks on these macros, to match the ptrauth_keys_* style
> above: define separate mte_set_{user,kernel}_gcr macros with a common
> mte_set_gcr helper used by both.

One more thing: the new mte_set_kernel_gcr should probably skip the
GCR_EL1 update entirely when KASAN_HW_TAGS is disabled, since the kernel
has no use for the exclude mask in that case.
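Something along these lines, roughly (untested, only rearranging the code
from the patch above to illustrate the shape; the gating of the kernel
variant on CONFIG_KASAN_HW_TAGS rather than CONFIG_ARM64_MTE is the point
of the previous paragraph):

	/* Set GCR_EL1.Exclude from \tmp, preserving GCR_EL1.RRND (bit 16) */
	.macro mte_set_gcr, tmp, tmp2
	mrs_s	\tmp2, SYS_GCR_EL1
	bfi	\tmp2, \tmp, #0, #16
	msr_s	SYS_GCR_EL1, \tmp2
	isb
	.endm

	.macro mte_set_kernel_gcr, tmp, tmp2
#ifdef CONFIG_KASAN_HW_TAGS
alternative_if_not ARM64_MTE
	b	1f
alternative_else_nop_endif
	ldr_l	\tmp, gcr_kernel_excl
	mte_set_gcr \tmp, \tmp2
1:
#endif
	.endm

	.macro mte_set_user_gcr, tsk, tmp, tmp2
#ifdef CONFIG_ARM64_MTE
alternative_if_not ARM64_MTE
	b	1f
alternative_else_nop_endif
	ldr	\tmp, [\tsk, #THREAD_GCR_EL1_USER]
	mte_set_gcr \tmp, \tmp2
1:
#endif
	.endm

The kernel_entry/kernel_exit call sites would then use
mte_set_kernel_gcr and mte_set_user_gcr respectively, dropping the
\el argument.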

--
Catalin