Re: [RFC PATCH v1 2/5] perf stat: Fork and launch perf record when perf stat needs to get retire latency value for a metric.

From: Ian Rogers
Date: Fri Feb 23 2024 - 02:48:06 EST


On Thu, Feb 22, 2024 at 11:03 PM Namhyung Kim <namhyung@xxxxxxxxxx> wrote:
>
> Hi,
>
> On Wed, Feb 21, 2024 at 12:34 PM Ian Rogers <irogers@xxxxxxxxxx> wrote:
> >
> > On Wed, Feb 21, 2024 at 9:52 AM Arnaldo Carvalho de Melo
> > <acme@xxxxxxxxxx> wrote:
> > >
> > > On Wed, Feb 21, 2024 at 02:20:56AM -0500, weilin.wang@xxxxxxxxx wrote:
> > > > From: Weilin Wang <weilin.wang@xxxxxxxxx>
> > >
> > > You wrote no description for this patch; please add one and show
> > > examples of the feature being used, if possible.
> > >
> > > See below for more comments.
> > >
> > > > Signed-off-by: Weilin Wang <weilin.wang@xxxxxxxxx>
> > > > +static int __cmd_script(struct child_process *cmd __maybe_unused)
> > > > +{
> > > > +	int err = 0;
> > > > +	struct perf_session *session;
> > > > +	struct perf_data data = {
> > > > +		.mode = PERF_DATA_MODE_READ,
> > > > +		.path = PERF_DATA,
> > > > +		.fd = cmd->out,
> > > > +	};
> > > > +	struct perf_script script = {
> > > > +		.tool = {
> > > > +			.sample = process_sample_event,
> > > > +			.ordered_events = true,
> > > > +			.ordering_requires_timestamps = true,
> > > > +			.feature = process_feature_event,
> > > > +			.attr = perf_event__process_attr,
> > > > +		},
> > > > +	};
> > > > +	struct tpebs_event *e;
> > > > +
> > > > +	list_for_each_entry(e, &stat_config.tpebs_events, nd) {
> > > > +		struct tpebs_retire_lat *new = malloc(sizeof(struct tpebs_retire_lat));
> > > > +
> > > > +		if (!new)
> > > > +			return -1;
> > > > +		new->name = strdup(e->name);
> > > > +		new->tpebs_name = strdup(e->tpebs_name);
> > > > +		new->count = 0;
> > > > +		new->sum = 0;
> > >
> > > Without even having thought that much about this overall architecture,
> > > that looks too heavy at first sight. The above is done in tools/perf/
> > > as:
> > >
> > > void tpebs_retire_lat__delete(struct tpebs_retire_lat *retire_lat)
> > > {
> > > 	if (retire_lat == NULL)
> > > 		return;
> > >
> > > 	zfree(&retire_lat->name);
> > > 	zfree(&retire_lat->tpebs_name);
> > > 	free(retire_lat);
> > > }
> > >
> > > struct tpebs_retire_lat *tpebs_retire_lat__new(struct tpebs_event *e)
> > > {
> > > 	struct tpebs_retire_lat *retire_lat = zalloc(sizeof(*retire_lat));
> > >
> > > 	if (retire_lat != NULL) {
> > > 		retire_lat->name = strdup(e->name);
> > > 		retire_lat->tpebs_name = strdup(e->tpebs_name);
> > >
> > > 		if (retire_lat->name == NULL || retire_lat->tpebs_name == NULL) {
> > > 			tpebs_retire_lat__delete(retire_lat);
> > > 			retire_lat = NULL;
> > > 		}
> > > 	}
> > >
> > > 	return retire_lat;
> > > }
> > >
> > > And then you call the constructor in that loop, and the destructor at
> > > some point when those data structures are not needed.
> > >
> > > We do it because perf has a TUI mode and we may end up calling tools
> > > from it in a long-running session, so we need to avoid leaks.
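
For concreteness, with a constructor/destructor like that the allocation loop
in __cmd_script above would reduce to something along these lines (a rough,
untested sketch; zalloc() already zeroes count and sum, and tpebs_data__delete()
at the end of cmd_stat() would walk tpebs_results calling
tpebs_retire_lat__delete() on each entry):

	list_for_each_entry(e, &stat_config.tpebs_events, nd) {
		struct tpebs_retire_lat *new = tpebs_retire_lat__new(e);

		if (!new)
			return -1;
		list_add_tail(&new->nd, &stat_config.tpebs_results);
	}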
> > >
> > > Also, can we somehow hide the Intel-specific terms in arch-specific
> > > files while leaving something generic, possibly implementable on other
> > > arches, in the core code? I mean hiding clearly Intel-specific stuff
> > > like the "tpebs" term in tools/perf/arch/x86/.
> >
> > Thanks Arnaldo. TPEBS support is necessary to support TMA metrics 4.7
> > on meteorlake. The timed values replace hard-coded constants that
> > assume worst-case latencies. You can see the metrics here:
> > https://github.com/intel/perfmon/blob/main/TMA_Metrics-full.csv
> > In the MTL (meteorlake) column, row 29, there is:
> > MEM_INST_RETIRED.STLB_HIT_LOADS*min($PEBS, #Mem_STLB_Hit_Cost) / CLKS
> > + Load_STLB_Miss
> > where the $PEBS is supposed to be replaced with the latency from a
> > sample of MEM_INST_RETIRED.STLB_HIT_LOADS. There are meteorlake
> > machines shipping, but currently there are no perf metrics for them.
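
(With made-up numbers: say #Mem_STLB_Hit_Cost is 7 cycles - if the sampled
average retire latency for MEM_INST_RETIRED.STLB_HIT_LOADS comes out at 9,
min(9, 7) keeps the constant 7, and if it comes out at 5, min(5, 7) uses the
measured 5.)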
> >
> > Weilin raised the TPEBS problem in the LPC 2023 talk, the issue being
> > that sampling and counting don't really coexist in the current perf
> > tool code. BPF could be a workaround, but permissions are an issue.
> > Perhaps leader sampling could work, but then what do we do if two
> > latencies are needed? Forking perf to do this is an expedient, and
> > ideally we'd not do it.
>
> Even with BPF, I think it needs two instances of an event - one for
> counting and the other for sampling, right? I wonder if it can just
> use a single event for sampling and show the sum of periods in
> PERF_SAMPLE_READ.
>
> I'm not sure if an event group can have sampling and non-sampling
> events at the same time. But it can be done without groups then.
> Anyway, what's the issue with two latencies?

The latencies come from samples, and with leader sampling only the
leader gets sampled, so we can't get two latencies. For two latencies
we'd need two groups with two leaders, or to modify leader sampling -
say, a way to encode that we want to sample but don't need the sample
in the mmap, just the latency made available to be read, etc. This and
BPF are both viable long-term solutions, but forking is the expedient
way to get something going - and we'd likely want it as a fallback
anyway.
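
To make the constraint concrete, here is roughly what a leader-sampling group
looks like at the perf_event_open() level (a minimal sketch with hypothetical
raw config values and no error handling; the weight bits only mean something on
hardware that actually reports a latency). Only the leader has a
sample_period/sample_type, so only its samples - and hence only its latency -
ever reach the ring buffer, while the other members are merely counted and read
back via PERF_FORMAT_GROUP:

	#include <linux/perf_event.h>
	#include <sys/syscall.h>
	#include <unistd.h>

	static int open_latency_group(__u64 leader_cfg, __u64 member_cfg)
	{
		struct perf_event_attr leader = {
			.type = PERF_TYPE_RAW,
			.size = sizeof(leader),
			.config = leader_cfg,	/* the one event we get a latency for */
			.sample_period = 10007,
			.sample_type = PERF_SAMPLE_READ | PERF_SAMPLE_WEIGHT_STRUCT,
			.read_format = PERF_FORMAT_GROUP,
			.precise_ip = 3,	/* ask for the most precise (PEBS) level */
			.disabled = 1,
		};
		struct perf_event_attr member = {
			.type = PERF_TYPE_RAW,
			.size = sizeof(member),
			.config = member_cfg,	/* counted only, never sampled */
		};
		int leader_fd, member_fd;

		/* Monitor the calling thread on any CPU; attach the member to the leader. */
		leader_fd = syscall(SYS_perf_event_open, &leader, 0, -1, -1, 0);
		if (leader_fd < 0)
			return leader_fd;
		member_fd = syscall(SYS_perf_event_open, &member, 0, -1, leader_fd, 0);
		return member_fd < 0 ? member_fd : leader_fd;
	}

A second event that needs its own latency would have to lead a second group
like this one, which is what makes N latencies awkward without either kernel
changes or the perf record fork.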

Thanks,
Ian

> Thanks,
> Namhyung
>
> >
> > I'm against stuff going in the arch directory in general. If something
> > is specific to a PMU, let's special case the logic for that PMU in the
> > tool. The arch directory gets us into messes like:
> > https://lore.kernel.org/lkml/CAP-5=fVABSBZnsmtRn1uF-k-G1GWM-L5SgiinhPTfHbQsKXb_g@xxxxxxxxxxxxxx/
> > where a heterogeneous/hybrid/big.little fix is in the x86 arch folder
> > (overriding a weak symbol) but not in the arm or other ones, leading
> > to an arm bug.
> >
> > I think the idea of a count coming from an external or tool source is
> > something of a more generic concept. I think what Weilin is doing here
> > is setting the groundwork for doing that, a first implementation. I'd
> > like the expr literals, such as #smt_on, #num_cpus, etc., to be more
> > like tool events such as duration_time. I think we can move in the
> > direction Weilin is adding here and then iterate to clean these
> > things up, hopefully moving them to events that other tools could
> > then use, etc.
> >
> > Thanks,
> > Ian
> >
> >
> >
> >
> > > > +		list_add_tail(&new->nd, &stat_config.tpebs_results);
> > > > +	}
> > > > +
> > > > +	kill(cmd->pid, SIGTERM);
> > > > +	session = perf_session__new(&data, &script.tool);
> > > > +	if (IS_ERR(session))
> > > > +		return PTR_ERR(session);
> > > > +	script.session = session;
> > > > +	err = perf_session__process_events(session);
> > > > +	perf_session__delete(session);
> > > > +
> > > > +	return err;
> > > > +}
> > > > +
> > > > static int __run_perf_stat(int argc, const char **argv, int run_idx)
> > > > {
> > > > 	int interval = stat_config.interval;
> > > > @@ -709,12 +866,14 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
> > > > 	struct affinity saved_affinity, *affinity = NULL;
> > > > 	int err;
> > > > 	bool second_pass = false;
> > > > +	struct child_process cmd;
> > > >
> > > > 	//Prepare perf record for sampling event retire_latency before fork and prepare workload
> > > > 	if (stat_config.tpebs_event_size > 0) {
> > > > 		int ret;
> > > >
> > > > -		ret = __run_perf_record();
> > > > +		pr_debug("perf stat pid = %d\n", getpid());
> > > > +		ret = prepare_perf_record(&cmd);
> > > > 		if (ret)
> > > > 			return ret;
> > > > 	}
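
For reference, a rough sketch of what the fork side can look like with the
subcmd run-command API, assuming it behaves like git's (setting .out = -1 asks
start_command() to create a pipe and hand the read end back in cmd->out); the
record command line below is purely illustrative:

	#include <string.h>
	#include <subcmd/run-command.h>

	static int prepare_perf_record_sketch(struct child_process *cmd)
	{
		/* Illustrative: sample the :p events and stream perf.data to stdout. */
		static const char *record_argv[] = {
			"perf", "record", "-W", "-o", "-", "-e", "some_event:p", NULL
		};

		memset(cmd, 0, sizeof(*cmd));
		cmd->argv = record_argv;
		cmd->out = -1;		/* ask for a pipe; its read end ends up in cmd->out */
		cmd->no_stdin = 1;

		return start_command(cmd);	/* later: read cmd->out, then finish_command(cmd) */
	}
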
> > > > @@ -924,6 +1083,17 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
> > > >
> > > > 	t1 = rdclock();
> > > >
> > > > +	if (stat_config.tpebs_event_size > 0) {
> > > > +		int ret;
> > > > +
> > > > +		pr_debug("pid = %d\n", getpid());
> > > > +		pr_debug("cmd.pid = %d\n", cmd.pid);
> > > > +
> > > > +		ret = __cmd_script(&cmd);
> > > > +		close(cmd.out);
> > > > +		pr_debug("%d\n", ret);
> > > > +	}
> > > > +
> > > > 	if (stat_config.walltime_run_table)
> > > > 		stat_config.walltime_run[run_idx] = t1 - t0;
> > > >
> > > > @@ -2714,6 +2884,7 @@ int cmd_stat(int argc, const char **argv)
> > > > 	}
> > > >
> > > > 	INIT_LIST_HEAD(&stat_config.tpebs_events);
> > > > +	INIT_LIST_HEAD(&stat_config.tpebs_results);
> > > >
> > > > 	/*
> > > > 	 * Metric parsing needs to be delayed as metrics may optimize events
> > > > @@ -2921,5 +3092,7 @@ int cmd_stat(int argc, const char **argv)
> > > > 	metricgroup__rblist_exit(&stat_config.metric_events);
> > > > 	evlist__close_control(stat_config.ctl_fd, stat_config.ctl_fd_ack, &stat_config.ctl_fd_close);
> > > >
> > > > +	tpebs_data__delete();
> > > > +
> > > > 	return status;
> > > > }
> > > > diff --git a/tools/perf/util/data.c b/tools/perf/util/data.c
> > > > index fc16299c915f..2298ca3b370b 100644
> > > > --- a/tools/perf/util/data.c
> > > > +++ b/tools/perf/util/data.c
> > > > @@ -173,6 +173,10 @@ static bool check_pipe(struct perf_data *data)
> > > > 	int fd = perf_data__is_read(data) ?
> > > > 		 STDIN_FILENO : STDOUT_FILENO;
> > > >
> > > > +	if (data->fd > 0) {
> > > > +		fd = data->fd;
> > > > +	}
> > > > +
> > > > 	if (!data->path) {
> > > > 		if (!fstat(fd, &st) && S_ISFIFO(st.st_mode))
> > > > 			is_pipe = true;
> > > > diff --git a/tools/perf/util/data.h b/tools/perf/util/data.h
> > > > index effcc195d7e9..5554d46ad212 100644
> > > > --- a/tools/perf/util/data.h
> > > > +++ b/tools/perf/util/data.h
> > > > @@ -28,6 +28,7 @@ struct perf_data_file {
> > > >
> > > > struct perf_data {
> > > > 	const char *path;
> > > > +	int fd;
> > > > 	struct perf_data_file file;
> > > > 	bool is_pipe;
> > > > 	bool is_dir;
> > > > diff --git a/tools/perf/util/metricgroup.c b/tools/perf/util/metricgroup.c
> > > > index 6c16e5a0b1fc..8518e2b3e5be 100644
> > > > --- a/tools/perf/util/metricgroup.c
> > > > +++ b/tools/perf/util/metricgroup.c
> > > > @@ -691,8 +691,17 @@ static int metricgroup__build_event_string(struct strbuf *events,
> > > >
> > > > 		if (p) {
> > > > 			struct tpebs_event *new_event = malloc(sizeof(struct tpebs_event));
> > > > -			*p = '\0';
> > > > +			char *name;
> > > > +
> > > > 			new_event->tpebs_name = strdup(id);
> > > > +			*p = '\0';
> > > > +			name = malloc(strlen(id) + 2);
> > > > +			if (!name)
> > > > +				return -ENOMEM;
> > > > +
> > > > +			strcpy(name, id);
> > > > +			strcat(name, ":p");
> > > > +			new_event->name = name;
> > >
> > > For such cases we use asprintf(), which allocates and formats the string
> > > in one call; look for examples in other tools/perf/ files.
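
i.e. something like this instead of the malloc()/strcpy()/strcat() sequence
(just a sketch):

	char *name = NULL;

	if (asprintf(&name, "%s:p", id) < 0)
		return -ENOMEM;
	new_event->name = name;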
> > >
> > > > 			*tpebs_event_size += 1;
> > > > 			pr_debug("retire_latency required, tpebs_event_size=%lu, new_event=%s\n",
> > > > 				 *tpebs_event_size, new_event->name);
> > > > diff --git a/tools/perf/util/metricgroup.h b/tools/perf/util/metricgroup.h
> > > > index 7c24ed768ff3..1fa12cc3294e 100644
> > > > --- a/tools/perf/util/metricgroup.h
> > > > +++ b/tools/perf/util/metricgroup.h
> > > > @@ -71,6 +71,13 @@ struct tpebs_event {
> > > > 	const char *name;
> > > > 	const char *tpebs_name;
> > > > };
> > > > +struct tpebs_retire_lat {
> > > > +	struct list_head nd;
> > > > +	const char *name;
> > > > +	const char *tpebs_name;
> > > > +	size_t count;
> > > > +	int sum;
> > > > +};
> > >
> > > Here is where you would declare the constructor and destructor I
> > > suggested above.
> > >
> > > > struct metric_event *metricgroup__lookup(struct rblist *metric_events,
> > > > 					 struct evsel *evsel,
> > > > diff --git a/tools/perf/util/stat.h b/tools/perf/util/stat.h
> > > > index 9d0186316b29..8e180f13aa2d 100644
> > > > --- a/tools/perf/util/stat.h
> > > > +++ b/tools/perf/util/stat.h
> > > > @@ -111,6 +111,9 @@ struct perf_stat_config {
> > > > 	struct rblist metric_events;
> > > > 	struct list_head tpebs_events;
> > > > 	size_t tpebs_event_size;
> > > > +	struct list_head tpebs_results;
> > > > +	pid_t tpebs_pid;
> > > > +	int tpebs_pipe;
> > > > 	int ctl_fd;
> > > > 	int ctl_fd_ack;
> > > > 	bool ctl_fd_close;
> > > > --
> > > > 2.43.0
> > > >