Re: [PATCH 5/6] mm, memcg: Prevent memory.swap.max load tearing

From: Michal Hocko
Date: Mon Mar 16 2020 - 10:59:04 EST


On Thu 12-03-20 17:33:11, Chris Down wrote:
> The write side of this is xchg()/smp_mb(), so that's all good. Just a
> few read sites are missing a READ_ONCE: without it, the compiler is
> free to tear or refetch the load of memcg->swap.max, which can be
> updated concurrently.
>
> Signed-off-by: Chris Down <chris@xxxxxxxxxxxxxx>
> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
> Cc: Johannes Weiner <hannes@xxxxxxxxxxx>
> Cc: Roman Gushchin <guro@xxxxxx>
> Cc: Tejun Heo <tj@xxxxxxxxxx>
> Cc: linux-mm@xxxxxxxxx
> Cc: cgroups@xxxxxxxxxxxxxxx
> Cc: linux-kernel@xxxxxxxxxxxxxxx
> Cc: kernel-team@xxxxxx

Acked-by: Michal Hocko <mhocko@xxxxxxxx>

> ---
>  mm/memcontrol.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index e0ed790a2a8c..57048a38c75d 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1511,7 +1511,7 @@ void mem_cgroup_print_oom_meminfo(struct mem_cgroup *memcg)
>  	if (cgroup_subsys_on_dfl(memory_cgrp_subsys))
>  		pr_info("swap: usage %llukB, limit %llukB, failcnt %lu\n",
>  			K((u64)page_counter_read(&memcg->swap)),
> -			K((u64)memcg->swap.max), memcg->swap.failcnt);
> +			K((u64)READ_ONCE(memcg->swap.max)), memcg->swap.failcnt);
>  	else {
>  		pr_info("memory+swap: usage %llukB, limit %llukB, failcnt %lu\n",
>  			K((u64)page_counter_read(&memcg->memsw)),
> @@ -1544,7 +1544,7 @@ unsigned long mem_cgroup_get_max(struct mem_cgroup *memcg)
>  		unsigned long swap_max;
> 
>  		memsw_max = memcg->memsw.max;
> -		swap_max = memcg->swap.max;
> +		swap_max = READ_ONCE(memcg->swap.max);
>  		swap_max = min(swap_max, (unsigned long)total_swap_pages);
>  		max = min(max + swap_max, memsw_max);
>  	}
> @@ -7025,7 +7025,8 @@ bool mem_cgroup_swap_full(struct page *page)
>  		return false;
> 
>  	for (; memcg != root_mem_cgroup; memcg = parent_mem_cgroup(memcg))
> -		if (page_counter_read(&memcg->swap) * 2 >= memcg->swap.max)
> +		if (page_counter_read(&memcg->swap) * 2 >=
> +				READ_ONCE(memcg->swap.max))
>  			return true;
> 
>  	return false;
> --
> 2.25.1
>
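
For completeness, a minimal userspace sketch of the pattern this patch
completes. This is a sketch, not kernel code: "limit" stands in for
memcg->swap.max, set_limit()/over_half_*() are hypothetical helpers, and
the READ_ONCE() below is a simplified equivalent of the kernel macro.

/* sketch.c: why the read sides want READ_ONCE().
 *
 * Build with: gcc -O2 -o sketch sketch.c
 */
#include <stdio.h>

/*
 * Simplified form of the kernel's READ_ONCE(): the volatile cast forces
 * the compiler to emit exactly one whole load, which it may neither
 * split (tear) nor refetch.
 */
#define READ_ONCE(x)	(*(const volatile __typeof__(x) *)&(x))

static unsigned long limit;	/* stands in for memcg->swap.max */

/*
 * Write side: an atomic exchange, standing in for the xchg() that the
 * commit message says already covers the kernel's write side.
 */
static void set_limit(unsigned long new)
{
	__atomic_exchange_n(&limit, new, __ATOMIC_SEQ_CST);
}

/*
 * Read side before the patch: a plain load. The compiler is allowed to
 * refetch "limit" or read it in pieces, so a concurrent set_limit()
 * can make this comparison observe an inconsistent value.
 */
static int over_half_racy(unsigned long usage)
{
	return usage * 2 >= limit;
}

/* Read side after the patch: one guaranteed whole load. */
static int over_half_fixed(unsigned long usage)
{
	return usage * 2 >= READ_ONCE(limit);
}

int main(void)
{
	set_limit(1024);
	printf("racy: %d  fixed: %d\n",
	       over_half_racy(900), over_half_fixed(900));
	return 0;
}

Single-threaded this prints "racy: 1  fixed: 1" either way; the
difference only matters when another thread calls set_limit()
concurrently, which is exactly the situation the memcg sites above
are in.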

--
Michal Hocko
SUSE Labs