Re: [PATCH 1/2 v2] memcg: make oom_lock 0 and 1 based rather than counter

From: Johannes Weiner
Date: Tue Aug 09 2011 - 10:04:03 EST


On Wed, Jul 13, 2011 at 01:05:49PM +0200, Michal Hocko wrote:
> @@ -1803,37 +1806,83 @@ static int mem_cgroup_hierarchical_reclaim(struct mem_cgroup *root_mem,
> /*
> * Check OOM-Killer is already running under our hierarchy.
> * If someone is running, return false.
> + * Has to be called with memcg_oom_mutex
> */
> static bool mem_cgroup_oom_lock(struct mem_cgroup *mem)
> {
> - int x, lock_count = 0;
> - struct mem_cgroup *iter;
> + int lock_count = -1;
> + struct mem_cgroup *iter, *failed = NULL;
> + bool cond = true;
>
> - for_each_mem_cgroup_tree(iter, mem) {
> - x = atomic_inc_return(&iter->oom_lock);
> - lock_count = max(x, lock_count);
> + for_each_mem_cgroup_tree_cond(iter, mem, cond) {
> + bool locked = iter->oom_lock;
> +
> + iter->oom_lock = true;
> + if (lock_count == -1)
> + lock_count = iter->oom_lock;
> + else if (lock_count != locked) {
> + /*
> + * this subtree of our hierarchy is already locked
> + * so we cannot give a lock.
> + */
> + lock_count = 0;
> + failed = iter;
> + cond = false;
> + }

I noticed system-wide hangs during a parallel/hierarchical memcg test
and found that a single task with a central i_mutex held was sleeping
on the memcg oom waitqueue, stalling everyone else contending for that
same inode.

The problem is the above code, which never succeeds in hierarchies
with more than one member: lock_count is taken from iter->oom_lock
right after it has been set to true, so it becomes 1 for the first
member, and on the next member that is not already locked the
lock_count != locked comparison trips and the whole attempt is
declared failed. The first task going OOM thus tries to oom lock the
hierarchy, fails, and goes to sleep on the OOM waitqueue with the
i_mutex still held, while nobody actually OOM kills anything to make
progress.
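
To make the failure mode concrete, here is a minimal stand-alone model
of that loop (not the kernel code; the memcg structure, the hierarchy
walk and the names are made up for illustration, only the if/else
logic is copied from the quoted patch), run against a two-member
hierarchy where nothing holds the lock:

#include <stdbool.h>
#include <stdio.h>

struct fake_memcg { bool oom_lock; };

int main(void)
{
	struct fake_memcg hierarchy[2] = { { false }, { false } };
	int lock_count = -1;
	bool cond = true;

	for (int i = 0; i < 2 && cond; i++) {
		struct fake_memcg *iter = &hierarchy[i];
		bool locked = iter->oom_lock;	/* false for both members */

		iter->oom_lock = true;
		if (lock_count == -1)
			lock_count = iter->oom_lock;	/* reads back 1, not 'locked' */
		else if (lock_count != locked) {
			/* 1 != 0: trips although nothing was locked before */
			lock_count = 0;
			cond = false;
		}
	}

	printf("oom lock %s\n", lock_count == 1 ? "acquired" : "failed");
	return 0;
}

This prints "oom lock failed" even though both members started out
unlocked, which is exactly the situation the first OOMing task runs
into before it goes to sleep.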

Here is a patch that rectified things for me.

---