[PATCH] Revert "mm:vmscan: fix inaccurate reclaim during proactive reclaim"

From: T.J. Mercier
Date: Sun Jan 21 2024 - 16:44:43 EST


This reverts commit 0388536ac29104a478c79b3869541524caec28eb.

With the reverted commit applied, proactive reclaim on the root cgroup
is 10x slower when MGLRU is enabled, and completion times for proactive
reclaim on much smaller non-root cgroups are ~30% longer (with or
without MGLRU). For root reclaim before that commit, I observe average
reclaim rates of ~70k pages/sec until try_to_free_mem_cgroup_pages
starts to fail and the nr_retries counter starts to decrement,
eventually ending the proactive reclaim attempt. After the commit, the
reclaim rate is consistently ~6.6k pages/sec because the reduced
nr_pages value causes the scan to abort as soon as SWAP_CLUSTER_MAX
pages have been reclaimed. The proactive reclaim attempt does not
complete even after several minutes, because try_to_free_mem_cgroup_pages
keeps succeeding in tiny SWAP_CLUSTER_MAX-page chunks, so nr_retries is
never decremented and the loop never gives up.

The docs for memory.reclaim say, "the kernel can over or under reclaim
from the target cgroup", which is the behavior the reverted commit was
trying to fix. Revert it until a less costly solution is found.

Signed-off-by: T.J. Mercier <tjmercier@xxxxxxxxxx>
---
mm/memcontrol.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index e4c8735e7c85..cee536c97151 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6956,8 +6956,8 @@ static ssize_t memory_reclaim(struct kernfs_open_file *of, char *buf,
lru_add_drain_all();

reclaimed = try_to_free_mem_cgroup_pages(memcg,
- min(nr_to_reclaim - nr_reclaimed, SWAP_CLUSTER_MAX),
- GFP_KERNEL, reclaim_options);
+ nr_to_reclaim - nr_reclaimed,
+ GFP_KERNEL, reclaim_options);

if (!reclaimed && !nr_retries--)
return -EAGAIN;
--
2.43.0.429.g432eaa2c6b-goog