Re: [RFC PATCH 2/2] tools/perf: Make group_fd static and move its place in __perf_evsel__open()

From: Arnaldo Carvalho de Melo
Date: Wed Oct 26 2011 - 07:21:59 EST


On Wed, Oct 26, 2011 at 09:18:28AM -0200, Arnaldo Carvalho de Melo wrote:
> On Wed, Oct 26, 2011 at 02:26:03PM +0800, Deng-Cheng Zhu wrote:
> > With _only_ your patch applied, I tested with the following command on
> > MIPS 74K (4 counters available in total):

> > perf stat -g -e
> > L1-dcache-load-misses,cycles,LLC-load-misses,iTLB-loads,instructions
> > find / >/dev/null

> > I tried to group up to 5 events in the hope of seeing an ENOSPC error. But
> > the command didn't fail and output:

> > Performance counter stats for 'find /':

> > 9300823 L1-dcache-load-misses
> > <not counted> cycles
> > <not counted> LLC-load-misses
> > <not counted> iTLB-loads
> > <not counted> instructions

> > 8.463207591 seconds time elapsed

> > This is due to the event state check in validate_group() filtering out
> > the grouped events in OFF state. They are in OFF state because we are
> > running the command with the perf tool as opposed to attaching to an
> > existing task:

> > builtin-stat.c:create_perf_stat_counter():

> > 	if (target_pid == -1 && target_tid == -1) {
> > 		attr->disabled = 1;
> > 		attr->enable_on_exec = 1;
> > 	}
>
> But they should be in OFF state only till the target program gets
> exec'ed, right?
>
> > I suppose X86 has this issue too -- collect_events() in validate_group()
> > won't do real work in the bottom half of the function.
>
> I'm testing that now.

[root@emilia ~]# perf stat -g -e L1-dcache-load-misses,cycles,LLC-load-misses,iTLB-loads,instructions find / >/dev/null

Performance counter stats for 'find /':

158,807,949 L1-dcache-load-misses
<not counted> cycles
<not counted> LLC-load-misses
<not counted> iTLB-loads
<not counted> instructions

100.676645958 seconds time elapsed

[root@emilia ~]#
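
Just so we're looking at the same picture: when perf stat forks the workload
itself, every counter in the group is opened with disabled = 1 and
enable_on_exec = 1 (the builtin-stat.c hunk quoted above), so all of them are
still in PERF_EVENT_STATE_OFF at perf_event_open() time -- and that is when
the group gets validated. A minimal sketch of that open sequence, with made-up
helper names and no error handling, would be something like:

#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

/* thin wrapper, there is no glibc stub for this syscall */
static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
                           int cpu, int group_fd, unsigned long flags)
{
        return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

/* open 'nr' hardware counters as one group on the forked child */
static int open_counter_group(pid_t child, const __u64 *configs, int nr)
{
        struct perf_event_attr attr;
        int i, fd, group_fd = -1;

        for (i = 0; i < nr; i++) {
                memset(&attr, 0, sizeof(attr));
                attr.size = sizeof(attr);
                attr.type = PERF_TYPE_HARDWARE;
                attr.config = configs[i];
                /* counting a forked workload: start OFF, enable at exec time */
                attr.disabled = 1;
                attr.enable_on_exec = 1;

                fd = perf_event_open(&attr, child, -1 /* any cpu */, group_fd, 0);
                if (fd < 0)
                        return -1;
                if (group_fd == -1)
                        group_fd = fd; /* first counter leads the group */
        }
        return group_fd;
}

So by the time the 5th event is being opened, everything already in the group
is still OFF as far as the arch code can tell.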

Yeah, reading more code...
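
If it is the state filter, the suspect would be the sibling walk in the arch
collect_events() that validate_group() ends up calling -- from memory it is
shaped roughly like this (paraphrased, not a verbatim copy of
arch/x86/kernel/cpu/perf_event.c):

        /* rough paraphrase of the sibling walk, not verbatim kernel code */
        list_for_each_entry(event, &leader->sibling_list, group_entry) {
                /*
                 * Siblings opened with attr.disabled = 1 are still in
                 * PERF_EVENT_STATE_OFF here, so they get skipped and
                 * never count against max_count...
                 */
                if (!is_x86_event(event) ||
                    event->state <= PERF_EVENT_STATE_OFF)
                        continue;

                if (n >= max_count)
                        return -ENOSPC; /* ...so this never triggers */

                cpuc->event_list[n] = event;
                n++;
        }

Which would match what Deng-Cheng is describing on the MIPS side, where
validate_group() filters out the OFF-state siblings the same way.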

- Arnaldo