Re: [PATCH 4/5] perf_events: add cgroup support (v7)

From: Stephane Eranian
Date: Wed Jan 05 2011 - 08:01:56 EST


On Wed, Jan 5, 2011 at 12:23 PM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> On Mon, 2011-01-03 at 18:20 +0200, Stephane Eranian wrote:
>
>> +#ifdef CONFIG_CGROUP_PERF
>> +/*
>> + * perf_cgroup_info keeps track of time_enabled for a cgroup.
>> + * This is a per-cpu dynamically allocated data structure.
>> + */
>> +struct perf_cgroup_info {
>> +	u64 time;
>> +	u64 timestamp;
>> +};
>> +
>> +struct perf_cgroup {
>> +	struct cgroup_subsys_state css;
>> +	struct perf_cgroup_info *info;	/* timing info, one per cpu */
>
> I think 'they' want a __percpu annotation there.
>
Ok, I can fix this.

>> +};
>> +#endif
>
>
>> diff --git a/kernel/perf_event.c b/kernel/perf_event.c
>> index b782b7a..905b91a 100644
>> --- a/kernel/perf_event.c
>> +++ b/kernel/perf_event.c
>
>
>> +static inline void __update_cgrp_time(struct perf_cgroup *cgrp)
>> +{
>> +	struct perf_cgroup_info *t;
>> +	u64 now;
>> +
>> +	now = perf_clock();
>> +
>> +	t = per_cpu_ptr(cgrp->info, smp_processor_id());
>
> this_cpu_ptr(cgrp->info);
>
Fixed.

>> +
>> +	t->time += now - t->timestamp;
>> +	t->timestamp = now;
>> +}
>
>
>> +static inline void
>> +perf_cgroup_set_timestamp(struct task_struct *task, u64 now)
>> +{
>> +	struct perf_cgroup *cgrp;
>> +	struct perf_cgroup_info *info;
>> +
>> +	if (!task)
>> +		return;
>> +
>> +	cgrp = perf_cgroup_from_task(task);
>> +	info = per_cpu_ptr(cgrp->info, smp_processor_id());
>
> this_cpu_ptr();
>
Fixed.

>> +	info->timestamp = now;
>> +}
>> +
>
>> +/*
>> + * called from perf_event_task_sched_out() conditional to jump label
>> + */
>> +void
>> +perf_cgroup_switch(struct task_struct *task, struct task_struct *next)
>> +{
>> +	struct perf_cgroup *cgrp_out = perf_cgroup_from_task(task);
>> +	struct perf_cgroup *cgrp_in = perf_cgroup_from_task(next);
>> +	struct perf_cpu_context *cpuctx;
>> +	struct pmu *pmu;
>> +	/*
>> +	 * if task is DEAD, then css_out is irrelevant, it has
>> +	 * been changed to init_cgrp in cgroup_exit() from do_exit().
>> +	 * Furthermore, perf_cgroup_exit_task() has scheduled out
>> +	 * all css constrained events, only unconstrained events
>> +	 * remain. Therefore we need to reschedule based on css_in.
>> +	 */
>> +	if (task->state != TASK_DEAD && cgrp_out == cgrp_in)
>> +		return;
>
> I think that check is broken, TASK_DEAD is set way after calling
> cgroup_exit(), so if we get preempted in between there you'll still go
> funny.
>
> We do set PF_EXITING before calling cgroup_exit() though.
>
Switched to PF_EXITING. Thanks for catching this one.

>> +	rcu_read_lock();
>> +
>> +	list_for_each_entry_rcu(pmu, &pmus, entry) {
>> +
>> +		cpuctx = this_cpu_ptr(pmu->pmu_cpu_context);
>> +
>> +		perf_pmu_disable(cpuctx->ctx.pmu);
>> +
>> +		/*
>> +		 * perf_cgroup_events says at least one
>> +		 * context on this CPU has cgroup events.
>> +		 *
>> +		 * ctx->nr_cgroups reports the number of cgroup
>> +		 * events for a context. Given there can be multiple
>> +		 * PMUs, there can be multiple contexts.
>> +		 */
>> +		if (cpuctx->ctx.nr_cgroups > 0) {
>> +			/*
>> +			 * schedule out everything we have
>> +			 * task == DEAD: only unconstrained events
>> +			 * task != DEAD: css constrained + unconstrained events
>> +			 *
>
> Does this comment want an update? As per the above (broken) check, we
> should never get here for DEAD tasks, hmm?
>
Updated.

>> +			 * We kick out all events (even if unconstrained)
>> +			 * to allow the constrained events to be scheduled
>> +			 * based on their position in the event list (fairness)
>> +			 */
>> +			cpu_ctx_sched_out(cpuctx, EVENT_ALL);
>> +			/*
>> +			 * reschedule css_in constrained + unconstrained events
>> +			 */
>> +			cpu_ctx_sched_in(cpuctx, EVENT_ALL, next, 1);
>> +		}
>> +
>> +		perf_pmu_enable(cpuctx->ctx.pmu);
>> +	}
>> +
>> +	rcu_read_unlock();
>> +}
>> +
>> +static inline void
>
> Copy/paste fail?
>
>> +perf_cgroup_exit_task(struct task_struct *task)
>> +{
>
>> +}
>> +
Dropped this one; it was useless.

>
>> +static inline int perf_cgroup_connect(int fd, struct perf_event *event,
>> +				      struct perf_event_attr *attr,
>> +				      struct perf_event *group_leader)
>
> Again, do we really need this 'inline' ?
>
Not really.

>> +{
>> +	struct perf_cgroup *cgrp;
>> +	struct cgroup_subsys_state *css;
>> +	struct file *file;
>> +	int ret = 0, fput_needed;
>> +
>> +	file = fget_light(fd, &fput_needed);
>> +	if (!file)
>> +		return -EBADF;
>> +
>> +	css = cgroup_css_from_dir(file, perf_subsys_id);
>> +	if (IS_ERR(css))
>> +		return PTR_ERR(css);
>> +
>> +	cgrp = container_of(css, struct perf_cgroup, css);
>> +	event->cgrp = cgrp;
>
>
> If we do that perf_get_cgroup() here (unconditional).
>
That is exactly what I did in my revised patch.

>> +	/*
>> +	 * all events in a group must monitor
>> +	 * the same cgroup because a thread belongs
>> +	 * to only one perf cgroup at a time
>> +	 */
>> +	if (group_leader && group_leader->cgrp != cgrp) {
>> +		perf_detach_cgroup(event);
>> +		ret = -EINVAL;
>> +	} else {
>> +		/* must be done before we fput() the file */
>> +		perf_get_cgroup(event);
>> +	}
>
> Then you can have that conditional perf_detach_cgroup() here, right?

Thanks.
Will resubmit just this patch.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/