Re: [PATCH v2] kernel/trace: Fix cleanup logic of enable_trace_eprobe

From: Tzvetomir Stoyanov
Date: Sun Jul 02 2023 - 23:47:36 EST


On Sun, Jul 2, 2023 at 5:50 PM Masami Hiramatsu <mhiramat@xxxxxxxxxx> wrote:
>
> On Sat, 1 Jul 2023 09:02:54 -0400
> Steven Rostedt <rostedt@xxxxxxxxxxx> wrote:
>
> > On Fri, 30 Jun 2023 15:16:27 +0300
> > "Tzvetomir Stoyanov (VMware)" <tz.stoyanov@xxxxxxxxx> wrote:
> >
> >
> > Hi Tzvetomir,
> >
> > FYI, linux-trace-devel is for the tracing user space code; please Cc
> > linux-trace-kernel for kernel patches. That way it falls into the
> > proper patchwork.
> >
> > I noticed this because I couldn't find your patch in:
> >
> > https://patchwork.kernel.org/project/linux-trace-kernel/list/
> >
> > Also, the Subject should just start with "tracing:".
> >
> > > The enable_trace_eprobe() function enables all event probes attached
> > > to a given trace probe. If an error occurs while enabling one of the
> > > event probes, all the others should be rolled back. There is a bug in
> > > that rollback logic: instead of all event probes, only the failed one
> > > is disabled.
> > >
> > > Reported-by: Dan Carpenter <dan.carpenter@xxxxxxxxxx>
> > > Fixes: 7491e2c44278 ("tracing: Add a probe that attaches to trace events")
> > > Signed-off-by: Tzvetomir Stoyanov (VMware) <tz.stoyanov@xxxxxxxxx>
> > > ---
> > > v2: Added one-time warning, as suggested by Steven Rostedt.
> >
> > It's always a nice touch (optional, but something I always do) to
> > add a link to the previous version:
> >
> > Changes since v1: https://lore.kernel.org/all/20230628121811.338655-1-tz.stoyanov@xxxxxxxxx/
> > - Added one-time warning (Steven Rostedt)
> >
> > >
> > > kernel/trace/trace_eprobe.c | 14 ++++++++++++--
> > > 1 file changed, 12 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/kernel/trace/trace_eprobe.c b/kernel/trace/trace_eprobe.c
> > > index 67e854979d53..6629fa217c99 100644
> > > --- a/kernel/trace/trace_eprobe.c
> > > +++ b/kernel/trace/trace_eprobe.c
> > > @@ -702,8 +702,18 @@ static int enable_trace_eprobe(struct trace_event_call *call,
> > >
> > > if (ret) {
> > > /* Failed to enable one of them. Roll back all */
> > > - if (enabled)
> > > - disable_eprobe(ep, file->tr);
> > > + if (enabled) {
> > > + /*
> > > + * It's a bug if one failed for something other than memory
> > > + * not being available but another eprobe succeeded.
> > > + */
> > > + WARN_ON_ONCE(ret != -ENOMEM);
> > > +
> > > + list_for_each_entry(pos, trace_probe_probe_list(tp), list) {
> > > + ep = container_of(pos, struct trace_eprobe, tp);
> > > + disable_eprobe(ep, file->tr);
> > > + }
> >
> > I think we may need the counter again ;-)
> >
> > But for another reason. We only want to call disable for what we
> > enabled, to avoid any unforeseen side effects.
> >
> >
> > cnt = 0;
> > list_for_each_entry(pos, trace_probe_probe_list(tp), list) {
> > ep = container_of(pos, struct trace_eprobe, tp);
> > ret = enable_eprobe(ep, file);
> > if (ret)
> > break;
> > enabled = true;
> > cnt++;
> > }
> >
> > if (ret) {
> > /* Failed to enable one of them. Roll back all */
> > if (enabled) {
> > list_for_each_entry(pos, trace_probe_probe_list(tp), list) {
> > ep = container_of(pos, struct trace_eprobe, tp);
> > disable_eprobe(ep, file->tr);
> > if (!--cnt)
> > break;
> > }
> > }
>
> +1. It seems that enable_eprobe() doesn't change ep, so we need a counter
> to track how many eprobes were enabled in the first loop in order to roll
> back only the already enabled eprobes in the 2nd loop.
>

Ok, I'll send v3 with the counter, although I think it is a bit of
over-engineering - that optimization is in a code path that is unlikely
to be executed.
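
Something like this for the roll back path (untested, just combining your
snippet with the WARN_ON_ONCE() from v2), where "cnt" counts the probes
enabled in the first loop:

	if (enabled) {
		/*
		 * It's a bug if one failed for something other than memory
		 * not being available but another eprobe succeeded.
		 */
		WARN_ON_ONCE(ret != -ENOMEM);

		/* Only disable the eprobes that were actually enabled */
		list_for_each_entry(pos, trace_probe_probe_list(tp), list) {
			ep = container_of(pos, struct trace_eprobe, tp);
			disable_eprobe(ep, file->tr);
			if (!--cnt)
				break;
		}
	}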

> Thank you,
>
>
> >
> > Thoughts?
> >
> > -- Steve
> >
> >
> >
> > > + }
> > > if (file)
> > > trace_probe_remove_file(tp, file);
> > > else
> >
>
>
> --
> Masami Hiramatsu (Google) <mhiramat@xxxxxxxxxx>



--
Tzvetomir (Ceco) Stoyanov
VMware Open Source Technology Center