Re: [PATCH v2 2/2] perf test: Add event group test

From: Liang, Kan
Date: Thu Dec 01 2022 - 09:02:34 EST




On 2022-12-01 4:13 a.m., Ravi Bangoria wrote:
> Hi Kan,
>
> Thanks for the review.
>
>>> +static int setup_uncore_event(void)
>>> +{
>>> +	struct perf_pmu *pmu;
>>> +
>>> +	if (list_empty(&pmus))
>>> +		perf_pmu__scan(NULL);
>>> +
>>> +	perf_pmus__for_each_pmu(pmu) {
>>> +		if (pmu->is_uncore) {
>>
>> Always using the first uncore PMU may trigger a false alarm on some Intel
>> platforms. For example, Intel has free-running uncore PMUs (e.g.,
>> uncore_imc_free_running_0), which only support the special event encoding
>> 0xff. Config 0 is bound to fail.
>> You may want to add the below check to ignore the free-running uncore PMUs:
>>
>> 	if (strstr(pmu->name, "free_running"))
>> 		continue;
>>
>>
>> Also, some uncore PMUs only support two counters, but the test assumes
>> that the number of counters is greater than two. You may want to limit
>> the size of a pure uncore group to 2.
>
> That seems hacky. Instead of ignoring them, would it be possible to provide
> a list of testable PMUs? Example with random values:
>
> /* Uncore pmus that support more than 3 counters */
> static struct uncore_pmus {
> 	char *name;
> 	unsigned long config;
> } uncore_pmus[] = {
> 	{ "amd_l3", 0x0 },
> 	{ "amd_df", 0x0 },
> 	{ "uncore_imc_xxx", 0xff },	/* Intel */

IMC seems a safe choice. AFAIK, we should have at least uncore_imc_0 on
all existing Intel platforms: { "uncore_imc_0", 0x1 }

> { "intel_xxx_pmu2", 0xff }, /* Intel */

Intel doesn't have such uncore PMUs.

> { "abc_pmu1", 0x0 }, /* Arm */
> { "hv_24x7", 0xa }, /* PowerPC */
> { ... },
> };
>
> perf_pmus__for_each_pmu(pmu) {
> 	if (pmu present in uncore_pmus[]) {
> 		type[2] = pmu->type;
> 		config[2] = pmu->config;
> 	}
> }


I'm not sure uncore_pmus[] can cover all possible names for all
architectures.

Maybe we should fall back to the first uncore PMU and try again if
nothing matches the uncore_pmus[] table.
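
Something like the below (completely untested; it assumes Ravi's
hypothetical uncore_pmus[] table and reuses the type[]/config[] arrays
and helpers from the patch) is roughly what I have in mind:

static int setup_uncore_event(void)
{
	struct perf_pmu *pmu, *fallback = NULL;
	unsigned int i;

	if (list_empty(&pmus))
		perf_pmu__scan(NULL);

	perf_pmus__for_each_pmu(pmu) {
		/* Skip non-uncore and free-running uncore PMUs. */
		if (!pmu->is_uncore || strstr(pmu->name, "free_running"))
			continue;

		/* Prefer a PMU with a known-good config from the table. */
		for (i = 0; i < ARRAY_SIZE(uncore_pmus); i++) {
			if (!strcmp(pmu->name, uncore_pmus[i].name)) {
				type[2] = pmu->type;
				config[2] = uncore_pmus[i].config;
				return 0;
			}
		}

		/* Remember the first usable uncore PMU as a fallback. */
		if (!fallback)
			fallback = pmu;
	}

	if (!fallback)
		return -1;	/* no uncore PMU at all: skip the uncore member */

	/* Nothing matched the table: fall back and try config 0. */
	type[2] = fallback->type;
	config[2] = 0;
	return 0;
}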

Thanks,
Kan
>
> Of course, this should work for all architectures. Arm, PowerPC, s390 folks?
>
> Thanks,
> Ravi