Re: [PATCH v7 2/3] arm64: implement ftrace with regs

From: Ard Biesheuvel
Date: Tue Jan 22 2019 - 08:55:27 EST


On Tue, 22 Jan 2019 at 14:28, Torsten Duwe <duwe@xxxxxx> wrote:
>
> On Tue, Jan 22, 2019 at 10:18:17AM +0000, Julien Thierry wrote:
> > Hi Torsten,
> >
> > A few suggestions below.
> >
> > > +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
> > > +#define ARCH_SUPPORTS_FTRACE_OPS 1
> > > +#define REC_IP_BRANCH_OFFSET AARCH64_INSN_SIZE
> > > +/* All we need is some magic value. Simply use "_mCount:" */
> >
> > Nit: Should the casing be "_mcount" ?
>
> No! The string makes it clear what it's supposed to be, and the peculiar
> casing makes it unique and leaves no doubt where it came from. So whenever
> you see this register value in a crash dump you don't have to wonder about
> weird linkage errors, as it surely did not originate from a symtab.
>

In that case, do you need to deal with endianness here?

> > > +#define MCOUNT_ADDR (0x5f6d436f756e743a)
> > > +#else
> > > +#define REC_IP_BRANCH_OFFSET 0
> > > +#define MCOUNT_ADDR ((unsigned long)_mcount)
> > > +#endif
> > > +
> > > #ifndef __ASSEMBLY__
> > > #include <linux/compat.h>
> > >
> > > --- a/arch/arm64/kernel/entry-ftrace.S
> > > +++ b/arch/arm64/kernel/entry-ftrace.S
> > > @@ -10,6 +10,7 @@
> > > */
> > >
> > > #include <linux/linkage.h>
> > > +#include <asm/asm-offsets.h>
> > > #include <asm/assembler.h>
> > > #include <asm/ftrace.h>
> > > #include <asm/insn.h>
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > > +
> > > +ENTRY(ftrace_common)
> > > +
> > > + mov x3, sp /* pt_regs are @sp */
> > > + ldr_l x2, function_trace_op, x0
> > > + mov x1, x9 /* parent IP */
> > > + sub x0, lr, #8 /* function entry == IP */
> >
> > The #8 is because we go back two instructions, right? Can we use
> > #(AARCH64_INSN_SIZE * 2) instead?
>
> Hmm, <asm/insn.h> was already included, so why not. (same below)
>
> > > +#ifdef CONFIG_DYNAMIC_FTRACE_WITH_REGS
> > > +int ftrace_modify_call(struct dyn_ftrace *rec, unsigned long old_addr,
> > > + unsigned long addr)
> > > +{
> > > + unsigned long pc = rec->ip + REC_IP_BRANCH_OFFSET;
> > > + u32 old, new;
> > > +
> > > + old = aarch64_insn_gen_branch_imm(pc, old_addr, true);
> > > + new = aarch64_insn_gen_branch_imm(pc, addr, true);
> >
> > The last argument of aarch64_insn_gen_branch_imm() is an enum, not a
> > boolean.
> >
> > You should use AARCH64_INSN_BRANCH_LINK here which happens to be equal to 1.
>
> That's what you get when you keep copying code...
> Good catch, thanks!
>
> > > + * initially; the NOPs are already there. So instead,
> > > + * put the LR saver there ahead of time, in order to avoid
> > > + * any race condition over patching 2 instructions.
> > > + */
> > > + if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS) &&
> > > + addr == MCOUNT_ADDR) {
> >
> > This works; however, it feels a bit weird, since core code asked us to
> > generate a NOP, yet not only do we not generate one, we put another
> > instruction there instead.
>
> It's not the first time weird x86 sets the standard and all the others
> need workarounds. But here it just came in handy to abuse this call :-)
>
> > I think it would be useful to state that the replacement of mcount calls
> > by nops is only ever done once at system initialization.
> >
> > Or maybe have a intermediate function:
> >
> > static int ftrace_setup_patchable_entry(unsigned long pc)
> > {
> > 	u32 old, new;
> >
> > 	old = aarch64_insn_gen_nop();
> > 	new = MOV_X9_X30;
> > 	pc -= REC_IP_BRANCH_OFFSET;
> > 	return ftrace_modify_code(pc, old, new, true);
> > }
> >
> > [...]
> >
> > if (IS_ENABLED(CONFIG_DYNAMIC_FTRACE_WITH_REGS) &&
> > addr == MCOUNT_ADDR)
> > return ftrace_setup_patchable_entry(pc);
> >
> >
> > This way it clearly shows that this is a setup/init corner case.
>
> I'll definitely consider this.
>
> Thanks for the input,
>
> Torsten