Re: [PATCH] perf/core: fix RCU issues with cgroup monitoring mode

From: Stephane Eranian
Date: Mon Oct 26 2015 - 15:27:19 EST


On Mon, Oct 19, 2015 at 2:12 AM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
>
> On Mon, Oct 19, 2015 at 12:58:47AM -0700, Stephane Eranian wrote:
> > Peter,
> >
> > On Sat, Oct 17, 2015 at 2:56 AM, Peter Zijlstra <peterz@xxxxxxxxxxxxx> wrote:
> > >
> > > On Sat, Oct 17, 2015 at 03:28:11AM +0200, Stephane Eranian wrote:
> > > >
> > > > This patch eliminates all known RCU violations detected
> > > > by the RCU checker (PROVE_RCU). The impacted code paths
> > > > were all related to cgroup mode monitoring and involved
> > > > accessing a task's cgrp.
> > >
> > > But were they right? This patch provides no clues.
> > >
> > I am assuming that if the checker detects something suspicious, there is
> > likely a problem.
> >
> > Take for instance:
> > perf_cgroup_sched_out() -> perf_cgroup_from_task() -> task_subsys_state()
> >
> > That one fires the checker, I think because we are accessing the css
> > state without protection.
> >
> > The other places are similar.
>
> But perf_cgroup_attach()->perf_cgroup_switch() takes ctx->lock().
>
> Therefore; if you hold ctx->lock, the cgroup is pinned.
>
Ok, that one was a bad example because yes, it grabs the ctx->lock and
rcu_read_lock() already.

But the other path:

__perf_event_task_sched_out() -> perf_cgroup_sched_out() -> perf_cgroup_switch()
accesses task_subsys_state() in perf_cgroup_sched_out() without holding
ctx->lock or the rcu_read_lock(). Same thing on the sched_in path.
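
Something like below is what I have in mind for that path (untested
sketch against my tree, just wrapping the css accesses in an explicit
read-side critical section; the sched_in side would get the same
treatment):

static inline void perf_cgroup_sched_out(struct task_struct *task,
					 struct task_struct *next)
{
	struct perf_cgroup *cgrp1;
	struct perf_cgroup *cgrp2 = NULL;

	/*
	 * pin the css so task_subsys_state() is legal; we only
	 * compare pointers here, so the section can stay short
	 */
	rcu_read_lock();

	cgrp1 = perf_cgroup_from_task(task);
	if (next)
		cgrp2 = perf_cgroup_from_task(next);

	/*
	 * only need to switch out cgroup events if the task is
	 * moving to a different cgroup
	 */
	if (cgrp1 != cgrp2)
		perf_cgroup_switch(task, PERF_CGROUP_SWOUT);

	rcu_read_unlock();
}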

The one place where we already hold ctx->lock is __perf_install_in_context().
Same thing for __perf_event_read() -> update_cgrp_time_from_event().
So yes, for those we'd need a way to tell lockdep this is okay, that the
cgroup cannot disappear.

>
> And the above sequence very much holds ctx->lock.
>
> Right?
>
> So it looks to me that we should teach perf_cgroup_from_task() about
> ctx->lock or something.
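
Something along the lines below, perhaps (untested sketch; it assumes the
task_css_check() accessor so lockdep accepts ctx->lock as an alternative
to rcu_read_lock(), with callers passing their ctx or NULL):

static inline struct perf_cgroup *
perf_cgroup_from_task(struct task_struct *task,
		      struct perf_event_context *ctx)
{
	/*
	 * valid either under rcu_read_lock() or with ctx->lock
	 * held; lockdep checks whichever condition we hand it
	 */
	return container_of(task_css_check(task, perf_event_cgrp_id,
				ctx ? lockdep_is_held(&ctx->lock) : true),
			    struct perf_cgroup, css);
}

Call sites that already hold ctx->lock, like __perf_install_in_context()
and __perf_event_read(), would pass their ctx; the sched_in/sched_out
paths would pass NULL and take rcu_read_lock() themselves.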