[PATCH v2] mm: vmscan: skip shrinking folios mapped by an exiting task

From: Zhiguo Jiang
Date: Wed Jan 31 2024 - 08:13:08 EST


If a folio being reclaimed by shrink_inactive_list is mapped by an
exiting task, it should be freed in the task's exit path rather than
reclaimed in the shrink path, because the former takes less time.

When exiting tasks and shrink_inactive_list run concurrently, the
folios in the lruvecs that shrink_inactive_list reclaims may be mapped
by the exiting tasks. This is more likely when the system is low on
memory, because more background applications get killed.

shrink_inactive_list reclaims the exiting tasks' folios from the
lruvecs and swaps out their anon folios, which increases the load on
the exiting tasks.

This patch alleviates the load of the task exit path: it lets exiting
tasks release their anon folios directly, instead of having to swap
those folios back in after shrink_inactive_list has swapped them out.

Signed-off-by: Zhiguo Jiang <justinjiang@xxxxxxxx>
---

Change log:
v1->v2:
1. The VM_EXITING flag added in the v1 patch is removed, because it
fails to compile on 32-bit systems.

mm/rmap.c | 7 +++++++
1 file changed, 7 insertions(+)
mode change 100644 => 100755 mm/rmap.c

diff --git a/mm/rmap.c b/mm/rmap.c
index 1cf2bffa48ed..e6702bfafdde
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -840,6 +840,13 @@ static bool folio_referenced_one(struct folio *folio,
 	int referenced = 0;
 	unsigned long start = address, ptes = 0;
 
+	/* Skip this folio if it's mapped by an exiting task */
+	if (unlikely(!atomic_read(&vma->vm_mm->mm_users)) ||
+	    unlikely(test_bit(MMF_OOM_SKIP, &vma->vm_mm->flags))) {
+		pra->referenced = -1;
+		return false;
+	}
+
 	while (page_vma_mapped_walk(&pvmw)) {
 		address = pvmw.address;

--
2.39.0