Re: [BUGFIX][PATCH v3] memcg: fix behavior of per cpu charge cache draining.

From: Daisuke Nishimura
Date: Wed Jun 08 2011 - 21:42:23 EST


On Thu, 9 Jun 2011 09:30:45 +0900
KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx> wrote:

> From 0ebd8a90a91d50c512e7c63e5529a22e44e84c42 Mon Sep 17 00:00:00 2001
> From: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
> Date: Wed, 8 Jun 2011 13:51:11 +0900
> Subject: [PATCH] Fix behavior of per-cpu charge cache draining in memcg.
>
> For performance, memory cgroup caches some "charge" from res_counter
> in a per-cpu cache. This works well, but because it is a cache,
> it needs to be flushed in some cases. Typical cases are
> 1. when someone hits the limit.
> 2. when rmdir() is called and the charges need to drop to 0.
>
> But "1" has problem.
>
> Recently, on large SMP machines, we see many kworker runs caused by
> flushing memcg's per-cpu cache. The bad things in the implementation are
>
> a) it is called before try_to_free_mem_cgroup_pages(), so it runs
> immediately when a task hits the limit.
> (I thought it was better to avoid running into memory reclaim,
> but that was the wrong decision.)
>
> b) the drain code is called even if a cpu's cache belongs to a memcg
> unrelated to the memcg which hit the limit.
>
> This patch fixes a) and b) by
>
> A) delaying the flush until after one run of try_to_free_mem_cgroup_pages(),
> which reduces the number of calls,
> B) checking whether the percpu cache contains useful data or not,
> plus
> C) checking that an asynchronous percpu drain is not already running.
> (A rough sketch of checks B) and C) follows below.)
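>
> The following is an illustrative userspace sketch of checks B) and C),
> not the kernel code itself; struct stock, is_descendant(), and
> should_drain() are names invented for this example.
>
> #include <stdbool.h>
> #include <stdio.h>
>
> struct memcg {
> 	int id;
> 	struct memcg *parent;
> };
>
> struct stock {
> 	struct memcg *cached;	/* which memcg this cpu's cache belongs to */
> 	unsigned int nr_pages;	/* cached charge */
> 	bool flushing;		/* async drain already scheduled? */
> };
>
> /* Is "mem" inside the hierarchy rooted at "root"? */
> static bool is_descendant(struct memcg *mem, struct memcg *root)
> {
> 	for (; mem; mem = mem->parent)
> 		if (mem == root)
> 			return true;
> 	return false;
> }
>
> /* Is this cpu's cache worth draining when "root" hits its limit? */
> static bool should_drain(struct stock *st, struct memcg *root)
> {
> 	if (!st->cached || !st->nr_pages)
> 		return false;		/* nothing useful cached: check B) */
> 	if (!is_descendant(st->cached, root))
> 		return false;		/* cache of an unrelated memcg: check B) */
> 	if (st->flushing)
> 		return false;		/* async drain already in flight: check C) */
> 	return true;
> }
>
> int main(void)
> {
> 	struct memcg root = { .id = 1 };
> 	struct memcg child = { .id = 2, .parent = &root };
> 	struct memcg other = { .id = 3 };
> 	struct stock cpu0 = { .cached = &child, .nr_pages = 32 };
> 	struct stock cpu1 = { .cached = &other, .nr_pages = 32 };
>
> 	printf("drain cpu0: %d\n", should_drain(&cpu0, &root)); /* 1: related */
> 	printf("drain cpu1: %d\n", should_drain(&cpu1, &root)); /* 0: unrelated */
> 	return 0;
> }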
>
> BTW, the reason this patch replaces the atomic_t counter with a mutex
> is to guarantee that the memcg pointed to by stock->cached is not
> destroyed while we check css_id. A small sketch of that idea also
> follows below.
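>
> Below is another illustrative userspace sketch of only the locking idea,
> not the kernel code: the same mutex is taken both by the path that
> dereferences the cached memcg pointer and by the teardown path that
> frees the memcg, so the pointer cannot be freed while it is examined.
> The names cache_lock, drain_check() and teardown() are invented here.
>
> #include <pthread.h>
> #include <stdio.h>
> #include <stdlib.h>
>
> struct memcg { int css_id; };
>
> static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;
> static struct memcg *cached;	/* per-cpu cached memcg (one slot here) */
>
> /* Drain path: inspect the cached pointer only under the lock. */
> static void drain_check(int target_id)
> {
> 	pthread_mutex_lock(&cache_lock);
> 	if (cached && cached->css_id == target_id)
> 		printf("cache belongs to css %d, drain it\n", cached->css_id);
> 	pthread_mutex_unlock(&cache_lock);
> }
>
> /* Teardown path: detach and free the memcg under the same lock. */
> static void teardown(struct memcg *mem)
> {
> 	pthread_mutex_lock(&cache_lock);
> 	if (cached == mem)
> 		cached = NULL;	/* forced drain of the cache */
> 	pthread_mutex_unlock(&cache_lock);
> 	free(mem);		/* safe: no drainer can still see it */
> }
>
> int main(void)
> {
> 	struct memcg *mem = malloc(sizeof(*mem));
>
> 	mem->css_id = 42;
> 	cached = mem;
> 	drain_check(42);
> 	teardown(mem);
> 	drain_check(42);	/* cache already cleared, nothing to do */
> 	return 0;
> }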
>
> Reported-by: Ying Han <yinghan@xxxxxxxxxx>
> Reviewed-by: Michal Hocko <mhocko@xxxxxxx>
> Signed-off-by: KAMEZAWA Hiroyuki <kamezawa.hiroyu@xxxxxxxxxxxxxx>
>
Acked-by: Daisuke Nishimura <nishimura@xxxxxxxxxxxxxxxxx>
