Re: [PATCH 8/8] memcg: get rid of mem_cgroup_from_task

From: Michal Hocko
Date: Thu Jul 09 2015 - 12:33:50 EST


On Thu 09-07-15 17:32:47, Vladimir Davydov wrote:
> On Thu, Jul 09, 2015 at 04:13:21PM +0200, Michal Hocko wrote:
> > On Wed 08-07-15 20:43:31, Vladimir Davydov wrote:
> > > On Wed, Jul 08, 2015 at 02:27:52PM +0200, Michal Hocko wrote:
> > [...]
> > > > @@ -1091,12 +1079,14 @@ bool task_in_mem_cgroup(struct task_struct *task, struct mem_cgroup *memcg)
> > > >  		task_unlock(p);
> > > >  	} else {
> > > >  		/*
> > > > -		 * All threads may have already detached their mm's, but the oom
> > > > -		 * killer still needs to detect if they have already been oom
> > > > -		 * killed to prevent needlessly killing additional tasks.
> > > > +		 * All threads have already detached their mm's, but we should
> > > > +		 * still be able to at least guess the original memcg from the
> > > > +		 * task_css. These two will match most of the time, but there
> > > > +		 * are corner cases where task->mm and task_css refer to
> > > > +		 * different cgroups.
> > > >  		 */
> > > >  		rcu_read_lock();
> > > > -		task_memcg = mem_cgroup_from_task(task);
> > > > +		task_memcg = mem_cgroup_from_css(task_css(task, memory_cgrp_id));
> > > >  		css_get(&task_memcg->css);
> > >
> > > I wonder why it's safe to call css_get here.
> >
> > What do you mean by safe? Memcg cannot go away because we are under rcu
> > lock.
>
> No, it can't, but css->refcnt can reach zero while we are here, can't
> it? If that happens, css->refcnt.release will be called twice, which
> means a double free. I think it's OK to call css_tryget{_online} from
> an RCU read-side section, but not css_get. Am I missing something?

OK, now I see what you mean. This is a good question indeed. This code has
been like that for quite a while and I took it for granted. I have to think
about it some more. Anyway, the patch doesn't change the behavior here.
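
For reference, the variant that would sidestep the question entirely is the
tryget loop we already use in get_mem_cgroup_from_mm. A minimal sketch, meant
to live in mm/memcontrol.c next to the code above (the helper name is made up
for illustration):

	/*
	 * Illustration only, not part of this patch: look up the task's
	 * memcg via task_css and take a reference with css_tryget_online
	 * rather than a bare css_get. rcu_read_lock keeps the css memory
	 * around but does not pin its refcount, so if the refcount has
	 * already dropped to zero the tryget simply fails and we retry
	 * (the task may have been migrated to a live cgroup meanwhile).
	 */
	static struct mem_cgroup *task_memcg_tryget(struct task_struct *task)
	{
		struct mem_cgroup *memcg;

		rcu_read_lock();
		do {
			memcg = mem_cgroup_from_css(task_css(task,
							     memory_cgrp_id));
		} while (!css_tryget_online(&memcg->css));
		rcu_read_unlock();

		return memcg;
	}

Whether the bare css_get in task_in_mem_cgroup can actually race with the
last css_put is exactly the open question above.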
--
Michal Hocko
SUSE Labs