[PATCH] mm, trivial: Simplify swap related code in try_to_unmap_one()

From: Huang, Ying
Date: Tue Mar 31 2020 - 04:46:29 EST


From: Huang Ying <ying.huang@xxxxxxxxx>

PageSwapCache() always returns false when PageSwapBacked() returns false,
and PageSwapBacked() is already checked for MADV_FREE pages in
try_to_unmap_one(). The swap related code in try_to_unmap_one() can
therefore be simplified, which improves its readability.
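
The invariant this relies on is visible in include/linux/page-flags.h:
PG_swapcache is only valid while PG_swapbacked is set, so the accessor
tests both bits. A paraphrased sketch (the exact form varies with kernel
version and options such as CONFIG_THP_SWAP):

static __always_inline bool PageSwapCache(struct page *page)
{
	/*
	 * PG_swapcache is defined to be valid only while PG_swapbacked
	 * is set, hence the double test: PageSwapCache() can never
	 * return true while PageSwapBacked() returns false.
	 */
	return PageSwapBacked(page) && test_bit(PG_swapcache, &page->flags);
}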

Signed-off-by: "Huang, Ying" <ying.huang@xxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxxxx>
Cc: Minchan Kim <minchan@xxxxxxxxxx>
Cc: Hugh Dickins <hughd@xxxxxxxxxx>
Cc: Rik van Riel <riel@xxxxxxxxxxx>
---
 mm/rmap.c | 27 ++++++++++++++-------------
 1 file changed, 14 insertions(+), 13 deletions(-)
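
For reviewers, the resulting order of checks in the PageAnon branch is
outlined below (a condensed sketch of the patched code, with the error
handling paths elided):

	} else if (PageAnon(page)) {
		swp_entry_t entry = { .val = page_private(subpage) };

		/* MADV_FREE pages are not swap backed: discard the
		 * page if it is clean, or fail the unmap and remap
		 * the pte if it has been redirtied. */
		if (!PageSwapBacked(page)) {
			/* ... discard or bail out ... */
		}

		/* Only swap backed pages get here, and those must be
		 * in the swap cache, so this should never trigger. */
		if (unlikely(!PageSwapCache(page))) {
			WARN_ON_ONCE(1);
			/* ... invalidate and bail out ... */
		}

		/* ... swap_duplicate(entry) and store the swap pte ... */
	}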

diff --git a/mm/rmap.c b/mm/rmap.c
index 2126fd4a254b..cd3c406aeac7 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1613,19 +1613,6 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 		} else if (PageAnon(page)) {
 			swp_entry_t entry = { .val = page_private(subpage) };
 			pte_t swp_pte;
-			/*
-			 * Store the swap location in the pte.
-			 * See handle_pte_fault() ...
-			 */
-			if (unlikely(PageSwapBacked(page) != PageSwapCache(page))) {
-				WARN_ON_ONCE(1);
-				ret = false;
-				/* We have to invalidate as we cleared the pte */
-				mmu_notifier_invalidate_range(mm, address,
-							address + PAGE_SIZE);
-				page_vma_mapped_walk_done(&pvmw);
-				break;
-			}
 
 			/* MADV_FREE page check */
 			if (!PageSwapBacked(page)) {
@@ -1648,6 +1635,20 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 				break;
 			}
 
+			/*
+			 * Store the swap location in the pte.
+			 * See handle_pte_fault() ...
+			 */
+			if (unlikely(!PageSwapCache(page))) {
+				WARN_ON_ONCE(1);
+				ret = false;
+				/* We have to invalidate as we cleared the pte */
+				mmu_notifier_invalidate_range(mm, address,
+							address + PAGE_SIZE);
+				page_vma_mapped_walk_done(&pvmw);
+				break;
+			}
+
 			if (swap_duplicate(entry) < 0) {
 				set_pte_at(mm, address, pvmw.pte, pteval);
 				ret = false;
--
2.25.0