RE: [RFC PATCH v4 2/6] perf stat: Fork and launch perf record when perf stat needs to get retire latency value for a metric.

From: Wang, Weilin
Date: Wed Mar 13 2024 - 12:24:02 EST




> -----Original Message-----
> From: Andi Kleen <ak@xxxxxxxxxxxxxxx>
> Sent: Wednesday, March 13, 2024 8:55 AM
> To: Wang, Weilin <weilin.wang@xxxxxxxxx>
> Cc: Namhyung Kim <namhyung@xxxxxxxxxx>; Ian Rogers
> <irogers@xxxxxxxxxx>; Arnaldo Carvalho de Melo <acme@xxxxxxxxxx>; Peter
> Zijlstra <peterz@xxxxxxxxxxxxx>; Ingo Molnar <mingo@xxxxxxxxxx>;
> Alexander Shishkin <alexander.shishkin@xxxxxxxxxxxxxxx>; Jiri Olsa
> <jolsa@xxxxxxxxxx>; Hunter, Adrian <adrian.hunter@xxxxxxxxx>; Kan Liang
> <kan.liang@xxxxxxxxxxxxxxx>; linux-perf-users@xxxxxxxxxxxxxxx; linux-
> kernel@xxxxxxxxxxxxxxx; Taylor, Perry <perry.taylor@xxxxxxxxx>; Alt, Samantha
> <samantha.alt@xxxxxxxxx>; Biggers, Caleb <caleb.biggers@xxxxxxxxx>
> Subject: Re: [RFC PATCH v4 2/6] perf stat: Fork and launch perf record when
> perf stat needs to get retire latency value for a metric.
>
> On Wed, Mar 13, 2024 at 03:31:14PM +0000, Wang, Weilin wrote:
> >
> >
> > > -----Original Message-----
> > > From: Andi Kleen <ak@xxxxxxxxxxxxxxx>
> > > Sent: Tuesday, March 12, 2024 5:56 PM
> > > To: Wang, Weilin <weilin.wang@xxxxxxxxx>
> > > Cc: Namhyung Kim <namhyung@xxxxxxxxxx>; Ian Rogers
> > > <irogers@xxxxxxxxxx>; Arnaldo Carvalho de Melo <acme@xxxxxxxxxx>; Peter
> > > Zijlstra <peterz@xxxxxxxxxxxxx>; Ingo Molnar <mingo@xxxxxxxxxx>;
> > > Alexander Shishkin <alexander.shishkin@xxxxxxxxxxxxxxx>; Jiri Olsa
> > > <jolsa@xxxxxxxxxx>; Hunter, Adrian <adrian.hunter@xxxxxxxxx>; Kan Liang
> > > <kan.liang@xxxxxxxxxxxxxxx>; linux-perf-users@xxxxxxxxxxxxxxx; linux-
> > > kernel@xxxxxxxxxxxxxxx; Taylor, Perry <perry.taylor@xxxxxxxxx>; Alt, Samantha
> > > <samantha.alt@xxxxxxxxx>; Biggers, Caleb <caleb.biggers@xxxxxxxxx>
> > > Subject: Re: [RFC PATCH v4 2/6] perf stat: Fork and launch perf record when
> > > perf stat needs to get retire latency value for a metric.
> > >
> > > "Wang, Weilin" <weilin.wang@xxxxxxxxx> writes:
> > >
> > > >> -----Original Message-----
> > > >> From: Andi Kleen <ak@xxxxxxxxxxxxxxx>
> > > >> Sent: Tuesday, March 12, 2024 5:03 PM
> > > >> To: Wang, Weilin <weilin.wang@xxxxxxxxx>
> > > >> Cc: Namhyung Kim <namhyung@xxxxxxxxxx>; Ian Rogers
> > > >> <irogers@xxxxxxxxxx>; Arnaldo Carvalho de Melo <acme@xxxxxxxxxx>; Peter
> > > >> Zijlstra <peterz@xxxxxxxxxxxxx>; Ingo Molnar <mingo@xxxxxxxxxx>;
> > > >> Alexander Shishkin <alexander.shishkin@xxxxxxxxxxxxxxx>; Jiri Olsa
> > > >> <jolsa@xxxxxxxxxx>; Hunter, Adrian <adrian.hunter@xxxxxxxxx>; Kan Liang
> > > >> <kan.liang@xxxxxxxxxxxxxxx>; linux-perf-users@xxxxxxxxxxxxxxx; linux-
> > > >> kernel@xxxxxxxxxxxxxxx; Taylor, Perry <perry.taylor@xxxxxxxxx>; Alt, Samantha
> > > >> <samantha.alt@xxxxxxxxx>; Biggers, Caleb <caleb.biggers@xxxxxxxxx>
> > > >> Subject: Re: [RFC PATCH v4 2/6] perf stat: Fork and launch perf record when
> > > >> perf stat needs to get retire latency value for a metric.
> > > >>
> > > >> weilin.wang@xxxxxxxxx writes:
> > > >>
> > > >> > From: Weilin Wang <weilin.wang@xxxxxxxxx>
> > > >> >
> > > >> > When a retire_latency value is used in a metric formula, perf stat would fork a
> > > >> > perf record process with "-e" and "-W" options. Perf record will collect
> > > >> > required retire_latency values in parallel while perf stat is collecting
> > > >> > counting values.
> > > >>
> > > >> How does that work when the workload is specified on the command line?
> > > >> The workload would run twice? That is very inefficient and may not
> > > >> work if it's a large workload.
> > > >>
> > > >> The perf tool infrastructure is imho not up to the task of such
> > > >> parallel collection.
> > > >>
> > > >> Also it won't work for very long collections because you will get a
> > > >> very large perf.data. Better to use a pipeline.
> > > >>
> > > >> I think it would be better if you made it a separate operation that can
> > > >> generate a file that is then consumed by perf stat. This is also more efficient
> > > >> because often the calibration is only needed once. And it's all under
> > > >> user control so no nasty surprises.
> > > >>
> > > >
> > > > The workload runs only once, with perf stat. Perf record is forked by perf stat and run
> > > > in parallel with perf stat. Perf stat will send perf record a signal to terminate after
> > > > perf stat stops collecting count values.
> > >
> > > I don't understand how the perf record filters on the workload created by
> > > the perf stat. At a minimum you would need -p to connect to the pid
> > > of the parent, but IIRC -p doesn't follow children, so if it forked
> > > it wouldn't work.
> > >
> > > I think your approach may only work with -a, but perhaps I'm missing
> > > something (-a is often not usable due to restrictions)
> > >
> > > Also if perf stat runs in interval mode and you only get the data
> > > at the end how would that work?
> > >
> > > IIRC I wrestled with all these questions for toplev (which has a
> > > similar feature), and in the end I concluded that doing it automatically
> > > has far too many problems.
> > >
> >
> > Yes, you are completely right that there is a limitation: we can only support -a and -C,
> > and we do not support -I for now. I'm wondering if we could support "-I" in a next step by
> > processing the sampled data on the fly.
>
> -I is very tricky in a separate process. How do you align the two
> intervals on long runs without drift? I don't know of a reliable
> way to do it in the general case using only time.
>
> Also, just the lack of support for forked workloads without -a is fatal IMHO. That's
> likely one of the most common cases.
>

We could use -a, -C, and cgroup together; I think this could be a useful use case.
There could be other improvements to the implementation in a next step, but I believe
the current implementation gives users access to the new feature with
accurate results and without adding too much overhead.
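
For example, an invocation along these lines should be possible (the metric name below is
just a placeholder for whichever metric uses retire_latency, and the cgroup name depends
on the system, so please treat this as a sketch rather than the exact command):

  # system-wide, restricted to one cgroup
  perf stat -M <metric_using_retire_latency> -a --for-each-cgroup <cgroup> -- sleep 30

  # or restricted to a set of CPUs
  perf stat -M <metric_using_retire_latency> -C 0-7 -- sleep 30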

Thanks,
Weilin

> Separate is a far better model IMHO:
>
> - It is under full user control, with no surprises
> - No uncontrolled multiplexing
> - Often it is fine to measure once and cache the data
>
> It cannot deal with -I properly either (short of some form of
> phase detection), but at least it doesn't give false promises
> to that effect.
>
> The way to do it is to have defaults in a JSON file
> which the user can override with a calibration step.
> There is a JSON format that is used by some other tools.
>
> This is my implementation:
> https://github.com/andikleen/pmu-tools/blob/master/genretlat.py
> https://github.com/andikleen/pmu-tools/blob/89861055b53e57ba0b7c6348745b2fbe6615c068/toplev.py#L1031
>
>
> -Andi
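
For completeness, if we later also add a calibration-file path like the one described above,
I would expect the file to be little more than a per-event mapping, roughly like the sketch
below (event names and values are purely illustrative, and I have not checked the exact
format that genretlat.py emits):

  {
    "MEM_INST_RETIRED.STLB_HIT_LOADS": 20,
    "ASSISTS.ANY": 33
  }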