Re: [perf stat] Extend --cpu to non-system-wide runs too? was Re: [PATCH v3] perf bench sched pipe: Add -G/--cgroups option

From: Arnaldo Carvalho de Melo
Date: Tue Oct 17 2023 - 14:31:37 EST


Em Tue, Oct 17, 2023 at 02:43:45PM +0200, Ingo Molnar escreveu:
> * Arnaldo Carvalho de Melo <acme@xxxxxxxxxx> wrote:
> > Em Tue, Oct 17, 2023 at 01:40:07PM +0200, Ingo Molnar escreveu:
> > > Side note: it might make sense to add a sane cpumask/affinity setting
> > > option to perf stat itself:

> > > perf stat --cpumask

> > > ... or so?

> > > We do have -C:

> > > -C, --cpu <cpu> list of cpus to monitor in system-wide

> > > ... but that's limited to --all-cpus, right?

> > > Perhaps we could extend --cpu to non-system-wide runs too?

> > Maybe I misunderstood your question, but it's a list of cpus to limit the
> > counting to:

> Ok.

> So I thought that "--cpumask mask/list/etc" should simply do what 'taskset'
> is doing: use the sched_setaffinity() syscall to set the affinity of the
> current workload and all its children.

> As for the impact on perf stat itself: it could just call sched_setaffinity()
> early on, and not bother about it any further?

> Having it built-in into perf would simply make it easier to not forget
> running 'taskset'. :-)

Would that be the only advantage?

I think using taskset isn't that much of a burden and keeps with the
Unix tradition, no? :-\
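
Just so we're talking about the same thing: a --cpumask that behaves like
taskset would boil down to something like this early in cmd_stat(), before
forking the workload (rough sketch, the option name and the set_cpumask()
helper are hypothetical, and it only takes a comma separated CPU list, no
ranges):

#define _GNU_SOURCE
#include <sched.h>
#include <stdlib.h>
#include <string.h>

/*
 * Hypothetical sketch, not actual perf code: pin the current process,
 * and therefore the workload it forks later on, to the CPUs in 'list'.
 */
static int set_cpumask(const char *list)
{
        cpu_set_t set;
        char *s = strdup(list), *tok, *sp = NULL;

        CPU_ZERO(&set);
        for (tok = strtok_r(s, ",", &sp); tok; tok = strtok_r(NULL, ",", &sp))
                CPU_SET(atoi(tok), &set);
        free(s);

        /* pid 0 == the current process, children inherit the mask across fork()/exec() */
        return sched_setaffinity(0, sizeof(set), &set);
}

i.e. exactly what taskset already does for us from the shell.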

See, using 'perf record -C', i.e. sampling, will use sched_setaffinity,
and in that case there is a clear advantage... wait, this train of
thought made me remember something, but it's just about counter setup,
not about the workload:

[acme@five perf-tools-next]$ grep affinity__set tools/perf/*.c
tools/perf/builtin-stat.c: else if (affinity__setup(&saved_affinity) < 0)
tools/perf/builtin-stat.c: if (affinity__setup(&saved_affinity) < 0)
[acme@five perf-tools-next]$

/*
 * perf_event_open does an IPI internally to the target CPU.
 * It is more efficient to change perf's affinity to the target
 * CPU and then set up all events on that CPU, so we amortize
 * CPU communication.
 */
void affinity__set(struct affinity *a, int cpu)
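
The way builtin-stat.c uses it goes roughly like this (a simplified sketch
from memory, the affinity__*() helpers are the real ones from
tools/perf/util/affinity.h, the wrapper function and the plain int array are
just for illustration):

static int open_counters_pinned(struct affinity *a, const int *cpus, int nr_cpus)
{
        int i;

        if (affinity__setup(a) < 0)
                return -1;

        for (i = 0; i < nr_cpus; i++) {
                /* Move perf itself to the target CPU first... */
                affinity__set(a, cpus[i]);
                /*
                 * ...then perf_event_open() every event for cpus[i], so the
                 * IPIs done by the syscall stay local to that CPU.
                 */
        }

        affinity__cleanup(a);   /* restore the original affinity mask */
        return 0;
}

That should be what shows up as the sched_setaffinity calls in the traces below: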

[root@five ~]# perf trace --summary -e sched_setaffinity perf stat -e cycles -a sleep 1

Performance counter stats for 'system wide':

6,319,186,681 cycles

1.002665795 seconds time elapsed


Summary of events:

perf (24307), 396 events, 87.4%

   syscall             calls  errors    total      min      avg      max  stddev
                                       (msec)   (msec)   (msec)   (msec)     (%)
   ----------------- ------- ------- -------- -------- -------- -------- ------
   sched_setaffinity     198       0    4.544    0.006    0.023    0.042   2.30%


[root@five ~]#

[root@five ~]# perf trace --summary -e sched_setaffinity perf stat -C 1 -e cycles -a sleep 1

Performance counter stats for 'system wide':

105,311,506 cycles

1.001203282 seconds time elapsed


Summary of events:

perf (24633), 24 events, 29.6%

   syscall             calls  errors    total      min      avg      max  stddev
                                       (msec)   (msec)   (msec)   (msec)     (%)
   ----------------- ------- ------- -------- -------- -------- -------- ------
   sched_setaffinity      12       0    0.105    0.005    0.009    0.039  32.07%


[root@five ~]# perf trace --summary -e sched_setaffinity perf stat -C 1,2 -e cycles -a sleep 1

Performance counter stats for 'system wide':

131,474,375 cycles

1.001324346 seconds time elapsed


Summary of events:

perf (24636), 36 events, 38.7%

   syscall             calls  errors    total      min      avg      max  stddev
                                       (msec)   (msec)   (msec)   (msec)     (%)
   ----------------- ------- ------- -------- -------- -------- -------- ------
   sched_setaffinity      18       0    0.442    0.000    0.025    0.093  24.75%


[root@five ~]# perf trace --summary -e sched_setaffinity perf stat -C 1,2,30 -e cycles -a sleep 1

Performance counter stats for 'system wide':

191,674,889 cycles

1.001280015 seconds time elapsed


Summary of events:

perf (24639), 48 events, 45.7%

   syscall             calls  errors    total      min      avg      max  stddev
                                       (msec)   (msec)   (msec)   (msec)     (%)
   ----------------- ------- ------- -------- -------- -------- -------- ------
   sched_setaffinity      24       0    0.835    0.000    0.035    0.144  24.40%


[root@five ~]#

Too much affinity setting :-)


- Arnaldo