Re: [PATCH v6 0/4] Add perf interface to expose nvdimm

From: Dan Williams
Date: Wed Feb 23 2022 - 16:17:25 EST


On Wed, Feb 23, 2022 at 11:07 AM Dan Williams <dan.j.williams@xxxxxxxxx> wrote:
>
> On Fri, Feb 18, 2022 at 10:06 AM Dan Williams <dan.j.williams@xxxxxxxxx> wrote:
> >
> > On Thu, Feb 17, 2022 at 8:34 AM Kajol Jain <kjain@xxxxxxxxxxxxx> wrote:
> > >
> > > This patchset adds performance stats reporting support for nvdimm.
> > > The added interface includes support for pmu register/unregister
> > > functions. A new structure, nvdimm_pmu, is added for carrying
> > > arch/platform-specific data such as the cpumask, the nvdimm device
> > > pointer and the pmu event functions (event_init/add/read/del).
> > > Users can use the standard perf tool to access the perf events
> > > exposed via the pmu.
> > >
> > > The interface also defines the supported event list, the config
> > > fields for the event attributes and their corresponding bit values,
> > > all of which are exported via sysfs. Patch 3 exposes the IBM pseries
> > > platform nmem* device performance stats using this interface.
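> > >
> > > As a rough illustration of the interface described above, here is a
> > > minimal sketch of the structure and the register/unregister entry
> > > points (field names, prototypes and the header location are assumed
> > > from this description and may not match the final patches exactly):
> > >
> > > /* sketch only: assumes the structure is declared in include/linux/nd.h */
> > > struct nvdimm_pmu {
> > >         struct pmu pmu;                 /* embedded perf pmu */
> > >         struct device *dev;             /* nvdimm device pointer */
> > >         int cpu;                        /* cpu on which counters are read */
> > >         struct hlist_node node;         /* cpu hotplug list node */
> > >         enum cpuhp_state cpuhp_state;   /* dynamic hotplug state */
> > >         struct cpumask arch_cpumask;    /* cpumask from arch/platform code */
> > > };
> > >
> > > int register_nvdimm_pmu(struct nvdimm_pmu *nd_pmu,
> > >                         struct platform_device *pdev);
> > > void unregister_nvdimm_pmu(struct nvdimm_pmu *nd_pmu);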
> > >
> > > Result from a power9 pseries lpar with two nvdimm devices:
> > >
> > > Ex: List all events with perf list
> > >
> > > command:# perf list nmem
> > >
> > > nmem0/cache_rh_cnt/ [Kernel PMU event]
> > > nmem0/cache_wh_cnt/ [Kernel PMU event]
> > > nmem0/cri_res_util/ [Kernel PMU event]
> > > nmem0/ctl_res_cnt/ [Kernel PMU event]
> > > nmem0/ctl_res_tm/ [Kernel PMU event]
> > > nmem0/fast_w_cnt/ [Kernel PMU event]
> > > nmem0/host_l_cnt/ [Kernel PMU event]
> > > nmem0/host_l_dur/ [Kernel PMU event]
> > > nmem0/host_s_cnt/ [Kernel PMU event]
> > > nmem0/host_s_dur/ [Kernel PMU event]
> > > nmem0/med_r_cnt/ [Kernel PMU event]
> > > nmem0/med_r_dur/ [Kernel PMU event]
> > > nmem0/med_w_cnt/ [Kernel PMU event]
> > > nmem0/med_w_dur/ [Kernel PMU event]
> > > nmem0/mem_life/ [Kernel PMU event]
> > > nmem0/poweron_secs/ [Kernel PMU event]
> > > ...
> > > nmem1/mem_life/ [Kernel PMU event]
> > > nmem1/poweron_secs/ [Kernel PMU event]
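> > >
> > > Counting one of the listed events with the standard perf tool would
> > > then look roughly like the following (event name taken from the list
> > > above; -a is used since the counters are per-device rather than
> > > per-task, and the workload is arbitrary):
> > >
> > > command:# perf stat -a -e nmem0/cache_rh_cnt/ sleep 5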
> > >
> > > Patch1:
> > > Introduces the nvdimm_pmu structure.
> > > Patch2:
> > > Adds the common interface for passing in arch/platform-specific
> > > data, including the nvdimm device pointer and pmu data, along with
> > > the pmu event functions. It also defines the supported event list,
> > > adds attribute groups for format, events and cpumask, and adds
> > > code for cpu hotplug support.
> > > Patch3:
> > > Adds code in arch/powerpc/platforms/pseries/papr_scm.c to expose
> > > the nmem* pmu. It fills in the nvdimm_pmu structure with the pmu
> > > name, capabilities, cpumask and event functions and then registers
> > > the pmu by handing it to register_nvdimm_pmu (see the sketch after
> > > this list).
> > > Patch4:
> > > Sysfs documentation patch.
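> > >
> > > A condensed sketch of the Patch3 flow described above (the handler
> > > names and the cpumask choice are placeholders; the real platform
> > > code supplies its own callbacks and derives the cpumask from device
> > > affinity):
> > >
> > > #include <linux/nd.h>                  /* assumed home of nvdimm_pmu */
> > > #include <linux/perf_event.h>
> > > #include <linux/platform_device.h>
> > > #include <linux/slab.h>
> > >
> > > static int papr_scm_pmu_register(struct papr_scm_priv *p)
> > > {
> > >         struct nvdimm_pmu *nd_pmu;
> > >         int rc;
> > >
> > >         nd_pmu = kzalloc(sizeof(*nd_pmu), GFP_KERNEL);
> > >         if (!nd_pmu)
> > >                 return -ENOMEM;
> > >
> > >         /* pmu name, capabilities and event callbacks from the platform */
> > >         nd_pmu->pmu.task_ctx_nr = perf_invalid_context;
> > >         nd_pmu->pmu.name        = nvdimm_name(p->nvdimm);
> > >         nd_pmu->pmu.event_init  = papr_scm_pmu_event_init;
> > >         nd_pmu->pmu.add         = papr_scm_pmu_add;
> > >         nd_pmu->pmu.del         = papr_scm_pmu_del;
> > >         nd_pmu->pmu.read        = papr_scm_pmu_read;
> > >         nd_pmu->pmu.capabilities = PERF_PMU_CAP_NO_INTERRUPT |
> > >                                    PERF_PMU_CAP_NO_EXCLUDE;
> > >
> > >         /* placeholder: real code picks cpus based on the device's node */
> > >         cpumask_copy(&nd_pmu->arch_cpumask, cpu_online_mask);
> > >
> > >         /* hand off to the common interface, which registers the pmu */
> > >         rc = register_nvdimm_pmu(nd_pmu, p->pdev);
> > >         if (rc)
> > >                 kfree(nd_pmu);
> > >         return rc;
> > > }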
> > >
> > > Changelog
> > > ---
> > > Resend v5 -> v6
> > > - No logic change, just a rebase to the latest upstream and a
> > > retest of the patchset.
> > >
> > > - Link to the patchset Resend v5: https://lkml.org/lkml/2021/11/15/3979
> > >
> > > v5 -> Resend v5
> > > - Resend the patchset
> > >
> > > - Link to the patchset v5: https://lkml.org/lkml/2021/9/28/643
> > >
> > > v4 -> v5:
> > > - Remove several variables defined in the nvdimm_pmu structure,
> > > including the name and the pmu functions (event_init/add/del/read),
> > > as they were only being copied into the pmu variable again. This
> > > step is now done directly in the arch-specific code, as suggested
> > > by Dan Williams.
> > >
> > > - Remove the attribute group field from the nvdimm_pmu structure
> > > and define these attribute groups in the common interface, which
> > > includes format and event list along with cpumask, as suggested by
> > > Dan Williams.
> > > Since the attribute groups needed by the common interface are now
> > > statically defined there, the corresponding code is removed from
> > > papr_scm.
> > >
> > > - Add nvdimm pmu event list with event codes in the common interface.
> > >
> > > - Remove the Acked-by/Reviewed-by/Tested-by tags, as the code was
> > > refactored to handle review comments from Dan.
> >
> > I don't think review comments should invalidate the Acked-by tags in
> > this case. Nothing fundamentally changed in the approach, and I would
> > like to have the perf ack before taking this through the nvdimm tree.
> >
> > Otherwise this looks good to me.
> >
> > Peter, might you have a chance to re-Ack this series, or any concerns
> > about me retrieving those Acks from the previous postings?
>
> Reached Peter offline and he refreshed his Acked-by.

There's still time for the tags from:

"Madhavan Srinivasan"
"Nageswara R Sastry"

...to be reapplied, but I'll go ahead with pushing this to Linux-next
in the meantime.