[PATCH v5 0/4] mm: Rework zap ptes on swap entries

From: Peter Xu
Date: Thu Feb 17 2022 - 01:08:09 EST


v5:
- Patch 1:
- A few comment fixups in patch 1 [John]
- Tweak commit message (s/user/caller/) [Andrew]
- Add r-bs for John
- Reindent commit messages to 75 columns [John]

RFC V1: https://lore.kernel.org/lkml/20211110082952.19266-1-peterx@xxxxxxxxxx
RFC V2: https://lore.kernel.org/lkml/20211115134951.85286-1-peterx@xxxxxxxxxx
V3: https://lore.kernel.org/lkml/20220128045412.18695-1-peterx@xxxxxxxxxx
V4: https://lore.kernel.org/lkml/20220216094810.60572-1-peterx@xxxxxxxxxx

Patch 1 should fix a long-standing bug in zap_pte_range() regarding
zap_details usage. The risk is that some swap entries could be skipped
when they should have been zapped.

Migration entries are not the major concern, because file-backed memory is
always zapped in the pattern "first zap without the page lock, then re-zap
with the page lock held", hence the 2nd zap will always make sure all
migration entries are already recovered.

However, real swap entries could get skipped erroneously. A reproducer is
provided in the commit message of patch 1.

Patches 2-4 are cleanups based on patch 1. With the whole patchset
applied, we should have a very clean view of zap_pte_range().

Only patch 1 needs to be backported to stable, if necessary.

Please review, thanks.

Peter Xu (4):
mm: Don't skip swap entry even if zap_details specified
mm: Rename zap_skip_check_mapping() to should_zap_page()
mm: Change zap_details.zap_mapping into even_cows
mm: Rework swap handling of zap_pte_range

mm/memory.c | 80 ++++++++++++++++++++++++++++++-----------------------
1 file changed, 45 insertions(+), 35 deletions(-)

--
2.32.0