Re: [PATCH] mm: extend reuse_swap_page range as much as possible

From: Huang, Ying
Date: Wed Nov 01 2017 - 21:42:30 EST


<zhouxianrong@xxxxxxxxxx> writes:

> From: zhouxianrong <zhouxianrong@xxxxxxxxxx>
>
> Originally, reuse_swap_page() requires that the sum of the page's
> mapcount and swapcount be less than or equal to one. In that case
> we can reuse the page and avoid COW.
>
> Now reuse_swap_page() requires only that the page's mapcount be
> less than or equal to one and that the page is not dirty in the
> swap cache. In that case we do not care about its swap count.
>
> A swap cache page that is not dirty has already been written to the
> swap device successfully during an earlier reclaim and then read
> back in on a swap fault. In that case the page can be reused even
> though its swap count is greater than one, and the COW for later
> accesses to the swap cache page is postponed rather than done now.
>
> I tested this patch on kernel 4.4.23 on arm64, with non-huge memory
> only. It works fine.

Why do you need this? You save copying one page from memory to memory
(COW) now, at the cost of reading a page from disk into memory later?

Best Regards,
Huang, Ying
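
As a rough summary of the condition change described above, the reuse
decision could be sketched as below (a simplified illustration only,
with the transparent-huge-page checks omitted; the authoritative change
is the quoted diff that follows):

/* Simplified sketch, not the actual kernel code.  The names mirror the
 * locals in reuse_swap_page(): "count" is the combined map+swap count
 * returned by page_trans_huge_map_swapcount(), "total_mapcount" is the
 * number of page table mappings of the page.
 */

/* Before the patch: reuse only when the page has a single reference
 * overall (mapcount + swapcount == 1) and sits in the swap cache.
 */
bool reuse_before = count == 1 && PageSwapCache(page);

/* After the patch: additionally reuse when at most one process maps
 * the page and the swap cache copy is clean, i.e. its contents still
 * match the copy on the swap device, regardless of how many swap
 * references remain; the COW for those references is deferred to
 * their later faults.
 */
bool reuse_after = (total_mapcount <= 1 && !PageDirty(page)) ||
		   (count == 1 && PageSwapCache(page));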

> Signed-off-by: zhouxianrong <zhouxianrong@xxxxxxxxxx>
> ---
> mm/swapfile.c | 9 +++++++--
> 1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index bf91dc9..c21cf07 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1543,22 +1543,27 @@ static int page_trans_huge_map_swapcount(struct page *page, int *total_mapcount,
> bool reuse_swap_page(struct page *page, int *total_map_swapcount)
> {
> int count, total_mapcount, total_swapcount;
> + int dirty;
>
> VM_BUG_ON_PAGE(!PageLocked(page), page);
> if (unlikely(PageKsm(page)))
> return false;
> + dirty = PageDirty(page);
> count = page_trans_huge_map_swapcount(page, &total_mapcount,
> &total_swapcount);
> if (total_map_swapcount)
> *total_map_swapcount = total_mapcount + total_swapcount;
> - if (count == 1 && PageSwapCache(page) &&
> + if ((total_mapcount <= 1 && !dirty) ||
> + (count == 1 && PageSwapCache(page) &&
> (likely(!PageTransCompound(page)) ||
> /* The remaining swap count will be freed soon */
> - total_swapcount == page_swapcount(page))) {
> + total_swapcount == page_swapcount(page)))) {
> if (!PageWriteback(page)) {
> page = compound_head(page);
> delete_from_swap_cache(page);
> SetPageDirty(page);
> + if (!dirty)
> + return true;
> } else {
> swp_entry_t entry;
> struct swap_info_struct *p;