Re: [PATCH 4/5] perf_events: add cgroup support (v6)

From: Stephane Eranian
Date: Wed Dec 08 2010 - 19:15:35 EST


On Wed, Dec 1, 2010 at 3:00 PM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> On Tue, 2010-11-30 at 19:20 +0200, Stephane Eranian wrote:
>
>> diff --git a/kernel/cgroup.c b/kernel/cgroup.c
>> index 66a416b..1c8bee8 100644
>> --- a/kernel/cgroup.c
>> +++ b/kernel/cgroup.c
>> @@ -4790,6 +4790,29 @@ css_get_next(struct cgroup_subsys *ss, int id,
>>          return ret;
>>  }
>>
>> +/*
>> + * get corresponding css from file open on cgroupfs directory
>> + */
>> +struct cgroup_subsys_state *cgroup_css_from_dir(struct file *f, int id)
>> +{
>> +        struct cgroup *cgrp;
>> +        struct inode *inode;
>> +        struct cgroup_subsys_state *css;
>> +
>> +        inode = f->f_dentry->d_inode;
>> +        /* check in cgroup filesystem dir */
>> +        if (inode->i_op != &cgroup_dir_inode_operations)
>> +                return ERR_PTR(-EBADF);
>> +
>> +        if (id < 0 || id >= CGROUP_SUBSYS_COUNT)
>> +                return ERR_PTR(-EINVAL);
>> +
>> +        /* get cgroup */
>> +        cgrp = __d_cgrp(f->f_dentry);
>> +        css = cgrp->subsys[id];
>> +        return css ? css : ERR_PTR(-ENOENT);
>> +}
>
> Since this paradigm was already in use it surprises me you have to add
> this function.. ?
>

Well, I could not find an existing one. If anybody knows of one, I will
take a look.

>> +#ifdef CONFIG_PERF_CGROUPS
>> +static inline struct perf_cgroup *
>> +perf_cgroup_from_task(struct task_struct *task)
>> +{
>> +        if (!task)
>> +                return NULL;
>> +        return container_of(task_subsys_state(task, perf_subsys_id),
>> +                            struct perf_cgroup, css);
>> +}
>
> Wouldn't it be nicer if the caller ensured to not call it for !task?
>
>
>> +static struct perf_cgroup *perf_get_cgroup(int fd)
>> +{
>> +        struct cgroup_subsys_state *css;
>> +        struct file *file;
>> +        int fput_needed;
>> +
>> +        file = fget_light(fd, &fput_needed);
>> +        if (!file)
>> +                return ERR_PTR(-EBADF);
>> +
>> +        css = cgroup_css_from_dir(file, perf_subsys_id);
>> +        if (!IS_ERR(css))
>> +                css_get(css);
>> +
>> +        fput_light(file, fput_needed);
>> +
>> +        return container_of(css, struct perf_cgroup, css);
>> +}
>> +
>> +static inline void perf_put_cgroup(struct perf_event *event)
>> +{
>> +        if (event->cgrp)
>> +                css_put(&event->cgrp->css);
>> +}
>
> Bit asymmetric, you get a perf_cgroup, but you put a perf_event.
>
Ok, I made this symmetrical now.
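
Roughly, the pair now looks like this (sketch only; perf_detach_cgroup() is
just an illustrative name for the event-level wrapper):

/* put side mirrors perf_get_cgroup(): it operates on a perf_cgroup */
static inline void perf_put_cgroup(struct perf_cgroup *cgrp)
{
        css_put(&cgrp->css);
}

/* event-level helper doing the NULL check once */
static inline void perf_detach_cgroup(struct perf_event *event)
{
        if (event->cgrp) {
                perf_put_cgroup(event->cgrp);
                event->cgrp = NULL;
        }
}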

>
>> +static inline void __update_css_time(struct perf_cgroup *cgrp)
>> +{
>> +        struct perf_cgroup_info *t;
>> +        u64 now;
>> +        int cpu = smp_processor_id();
>> +
>> +        if (!cgrp)
>> +                return;
>> +
>> +        now = perf_clock();
>> +
>> +        t = per_cpu_ptr(cgrp->info, cpu);
>> +
>> +        t->time += now - t->timestamp;
>> +        t->timestamp = now;
>> +}
>
> Most callers seem to already check for !cgrp, make that all and avoid
> the second conditional?
>
Done.
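
Something like this, with the helper assuming a valid cgrp and a wrapper
(update_css_time_from_task() is an illustrative name) doing the check once:

static inline void __update_css_time(struct perf_cgroup *cgrp)
{
        struct perf_cgroup_info *t;
        u64 now = perf_clock();

        /* callers guarantee cgrp != NULL */
        t = per_cpu_ptr(cgrp->info, smp_processor_id());
        t->time += now - t->timestamp;
        t->timestamp = now;
}

static inline void update_css_time_from_task(struct task_struct *task)
{
        struct perf_cgroup *cgrp = perf_cgroup_from_task(task);

        if (cgrp)
                __update_css_time(cgrp);
}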

>> +/*
>> + * called from perf_event_task_sched_out(), conditional on the jump label
>> + */
>> +void
>> +perf_cgroup_switch(struct task_struct *task, struct task_struct *next)
>> +{
>> +        struct perf_cgroup *cgrp_out = perf_cgroup_from_task(task);
>> +        struct perf_cgroup *cgrp_in = perf_cgroup_from_task(next);
>> +        struct perf_cpu_context *cpuctx;
>> +        struct pmu *pmu;
>> +        /*
>> +         * if task is DEAD, then css_out is irrelevant, it has
>> +         * been changed to init_css in cgroup_exit() from do_exit().
>> +         * Furthermore, perf_cgroup_exit_task() has scheduled out
>> +         * all css constrained events, only unconstrained events
>> +         * remain. Therefore we need to reschedule based on css_in.
>> +         */
>> +        if (task->state != TASK_DEAD && cgrp_out == cgrp_in)
>> +                return;
>> +
>> +        rcu_read_lock();
>> +
>> +        list_for_each_entry_rcu(pmu, &pmus, entry) {
>> +
>> +                cpuctx = get_cpu_ptr(pmu->pmu_cpu_context);
>> +
>> +                perf_pmu_disable(cpuctx->ctx.pmu);
>> +
>> +                /*
>> +                 * perf_cgroup_events says at least one
>> +                 * context on this CPU has cgroup events.
>> +                 *
>> +                 * ctx->nr_cgroups reports the number of cgroup
>> +                 * events for a context. Given there can be multiple
>> +                 * PMUs, there can be multiple contexts.
>> +                 */
>> +                if (cpuctx->ctx.nr_cgroups > 0) {
>> +                        /*
>> +                         * schedule out everything we have
>> +                         * task == DEAD: only unconstrained events
>> +                         * task != DEAD: css constrained + unconstrained events
>> +                         *
>> +                         * We kick out all events (even if unconstrained)
>> +                         * to allow the constrained events to be scheduled
>> +                         * based on their position in the event list (fairness)
>> +                         */
>> +                        cpu_ctx_sched_out(cpuctx, EVENT_ALL);
>> +                        /*
>> +                         * reschedule css_in constrained + unconstrained events
>> +                         */
>> +                        cpu_ctx_sched_in(cpuctx, EVENT_ALL, next, 1);
>> +                }
>> +
>> +                perf_pmu_enable(cpuctx->ctx.pmu);
>
> Do you leak a preemption count here? No matching put_cpu_ptr().
>
> Since we're in the middle of a context switch, preemption is already
> disabled and it might be best to use this_cpu_ptr() instead of
> get_cpu_ptr(). That avoids the preemption bits.
>
Done.
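
The loop body then becomes (sketch, same logic as above with only the
per-cpu accessor changed):

        list_for_each_entry_rcu(pmu, &pmus, entry) {
                /*
                 * we are in the middle of a context switch, preemption
                 * is already disabled, so this_cpu_ptr() is sufficient
                 * and there is no preemption count to drop afterwards
                 */
                cpuctx = this_cpu_ptr(pmu->pmu_cpu_context);

                perf_pmu_disable(cpuctx->ctx.pmu);
                if (cpuctx->ctx.nr_cgroups > 0) {
                        cpu_ctx_sched_out(cpuctx, EVENT_ALL);
                        cpu_ctx_sched_in(cpuctx, EVENT_ALL, next, 1);
                }
                perf_pmu_enable(cpuctx->ctx.pmu);
        }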

>> +static inline void
>> +perf_cgroup_exit_task(struct task_struct *task)
>> +{
>> +        struct perf_cpu_context *cpuctx;
>> +        struct pmu *pmu;
>> +        unsigned long flags;
>> +
>> +        local_irq_save(flags);
>> +
>> +        rcu_read_lock();
>> +
>> +        list_for_each_entry_rcu(pmu, &pmus, entry) {
>> +
>> +                cpuctx = get_cpu_ptr(pmu->pmu_cpu_context);
>> +
>> +                perf_pmu_disable(cpuctx->ctx.pmu);
>> +
>> +                if (cpuctx->ctx.nr_cgroups > 0) {
>> +                        /*
>> +                         * task is going to be detached from css.
>> +                         * We cannot keep a reference on the css
>> +                         * as it may disappear before we get to
>> +                         * perf_cgroup_switch(). Thus, we remove
>> +                         * all css constrained events.
>> +                         *
>> +                         * We do this by scheduling out everything
>> +                         * we have, and then rescheduling only
>> +                         * the unconstrained events. Those can keep
>> +                         * on counting.
>> +                         *
>> +                         * We re-examine the situation in the final
>> +                         * perf_cgroup_switch() call for this task
>> +                         * once we know the next task.
>> +                         */
>> +                        cpu_ctx_sched_out(cpuctx, EVENT_ALL);
>> +                        /*
>> +                         * task = NULL causes perf_cgroup_match()
>> +                         * to match only unconstrained events
>> +                         */
>> +                        cpu_ctx_sched_in(cpuctx, EVENT_ALL, NULL, 1);
>> +                }
>> +
>> +                perf_pmu_enable(cpuctx->ctx.pmu);
>
> Another preemption leak?
>
Done.

>
>> @@ -246,6 +581,10 @@ static void update_context_time(struct perf_event_context *ctx)
>>  static u64 perf_event_time(struct perf_event *event)
>>  {
>>          struct perf_event_context *ctx = event->ctx;
>> +
>> +        if (is_cgroup_event(event))
>> +                return perf_cgroup_event_css_time(event);
>> +
>>          return ctx ? ctx->time : 0;
>>  }
>>
>> @@ -261,8 +600,10 @@ static void update_event_times(struct perf_event *event)
>>              event->group_leader->state < PERF_EVENT_STATE_INACTIVE)
>>                  return;
>>
>> -        if (ctx->is_active)
>> -                run_end = perf_event_time(event);
>> +        if (is_cgroup_event(event))
>> +                run_end = perf_cgroup_event_css_time(event);
>> +        else if (ctx->is_active)
>> +                run_end = ctx->time;
>>          else
>>                  run_end = event->tstamp_stopped;
>
> So I guess the difference is that we want perf_cgroup_event_css_time()
> even when !active?
>
The difference is in the way time_enabled is accounted for in cgroup mode.
time_enabled represents the time the event is enabled AND the monitored
threads were active on the monitored CPU. It is therefore independent of
the state of the context. A context may have both cgroup and non-cgroup
events attached to it. I have added a comment to explain that.
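
The comment will read roughly like this (wording not final):

        /*
         * in cgroup mode, time_enabled represents the time the event
         * was enabled AND a thread of the monitored cgroup was running
         * on the monitored CPU. That time is accounted per cgroup and
         * is therefore independent of ctx->is_active, which covers
         * cgroup and non-cgroup events alike.
         */
        if (is_cgroup_event(event))
                run_end = perf_cgroup_event_css_time(event);
        else if (ctx->is_active)
                run_end = ctx->time;
        else
                run_end = event->tstamp_stopped;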

>> @@ -322,6 +663,17 @@ list_add_event(struct perf_event *event, struct perf_event_context *ctx)
>>                  list_add_tail(&event->group_entry, list);
>>          }
>>
>> +        if (is_cgroup_event(event)) {
>> +                ctx->nr_cgroups++;
>> +                /*
>> +                 * one more event:
>> +                 * - that has cgroup constraint on event->cpu
>> +                 * - that may need work on context switch
>> +                 */
>> +                atomic_inc(&per_cpu(perf_cgroup_events, event->cpu));
>> +                jump_label_inc(&perf_sched_events);
>> +        }
>
> Ah, I guess this is why you're still using atomics, since another cpu
> can install the counters on the target cpu,. ok I guess that makes
> sense.
>

YES!
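
For completeness, the teardown side is expected to mirror this in
list_del_event(); a sketch:

        if (is_cgroup_event(event)) {
                ctx->nr_cgroups--;
                /* matches the inc done in list_add_event() on event->cpu */
                atomic_dec(&per_cpu(perf_cgroup_events, event->cpu));
                jump_label_dec(&perf_sched_events);
        }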

>> -        event->shadow_ctx_time = tstamp - ctx->timestamp;
>> +        /*
>> +         * use the correct time source for the time snapshot
>> +         *
>> +         * We could get by without this by leveraging the
>> +         * fact that to get to this function, the caller
>> +         * has most likely already called update_context_time()
>> +         * and update_css_time_xx() and thus both timestamps
>> +         * are identical (or very close). Given that tstamp is
>> +         * already adjusted for cgroup, we could say that:
>> +         *    tstamp - ctx->timestamp
>> +         * is equivalent to
>> +         *    tstamp - cgrp->timestamp.
>> +         *
>> +         * Then, in perf_output_read(), the calculation would
>> +         * work with no changes because:
>> +         * - the event is guaranteed scheduled in
>> +         * - it is not scheduled out in between
>> +         * - thus the timestamp would be the same
>> +         *
>> +         * But this is a bit hairy.
>> +         *
>> +         * So instead, we have an explicit cgroup call to remain
>> +         * within the time source all along. We believe it
>> +         * is cleaner and simpler to understand.
>> +         */
>> +        if (is_cgroup_event(event))
>> +                perf_cgroup_set_shadow_time(event, tstamp);
>> +        else
>> +                event->shadow_ctx_time = tstamp - ctx->timestamp;
>
> How about we make perf_set_shadow_time() and hide all this in there?
>
>
Done.
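
The helper hides the time-source selection; a sketch of what it could look
like:

static void perf_set_shadow_time(struct perf_event *event,
                                 struct perf_event_context *ctx,
                                 u64 tstamp)
{
        /*
         * cgroup events measure time relative to their cgroup
         * timestamp, all other events relative to the context
         * timestamp
         */
        if (is_cgroup_event(event))
                perf_cgroup_set_shadow_time(event, tstamp);
        else
                event->shadow_ctx_time = tstamp - ctx->timestamp;
}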

>> @@ -5289,6 +5719,7 @@ unlock:
>>  static struct perf_event *
>>  perf_event_alloc(struct perf_event_attr *attr, int cpu,
>>                   struct task_struct *task,
>> +                 int cgrp_fd, int flags,
>>                   struct perf_event *group_leader,
>>                   struct perf_event *parent_event,
>>                   perf_overflow_handler_t overflow_handler)
>> @@ -5302,6 +5733,14 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
>>          if (!event)
>>                  return ERR_PTR(-ENOMEM);
>>
>> +        if (flags & PERF_FLAG_PID_CGROUP) {
>> +                err = perf_connect_cgroup(cgrp_fd, event, attr, group_leader);
>> +                if (err) {
>> +                        kfree(event);
>> +                        return ERR_PTR(err);
>> +                }
>> +        }
>> +
>>          /*
>>           * Single events are their own group leaders, with an
>>           * empty sibling list:
>
>
> Hrm,. that isn't particularly pretty,.. why do we have to do this in
> perf_event_alloc()? Can't we do this in the syscall after
> perf_event_alloc() returns?
>
Done.
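
The cgroup connection now happens in the perf_event_open() syscall once
perf_event_alloc() has returned, along these lines (error labels and the
exact argument list are illustrative):

        event = perf_event_alloc(&attr, cpu, task, group_leader, NULL,
                                 overflow_handler);
        if (IS_ERR(event)) {
                err = PTR_ERR(event);
                goto err_task;
        }

        if (flags & PERF_FLAG_PID_CGROUP) {
                /* in cgroup mode, the pid argument carries the cgroup fd */
                err = perf_connect_cgroup(pid, event, &attr, group_leader);
                if (err)
                        goto err_alloc;
        }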

Will be posting an updated version soon.
I also realized I need to check how cgroups are handled for the SW events.