Re: [PATCHv2 4/8] khugepaged: Drain LRU add pagevec after swapin

From: Yang Shi
Date: Wed Apr 08 2020 - 14:43:26 EST

On 4/8/20 6:05 AM, Kirill A. Shutemov wrote:
> On Mon, Apr 06, 2020 at 11:29:11AM -0700, Yang Shi wrote:

>> On 4/3/20 4:29 AM, Kirill A. Shutemov wrote:
>>> __collapse_huge_page_isolate() may fail due to an extra pin in the LRU add
>>> pagevec. It's pretty common for the swapin case: we swap in pages just to
>>> fail due to the extra pin.
>>>
>>> Drain the LRU add pagevec on successful swapin.
>>>
>>> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@xxxxxxxxxxxxxxx>
>>> ---
>>> mm/khugepaged.c | 5 +++++
>>> 1 file changed, 5 insertions(+)
>>>
>>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>>> index fdc10ffde1ca..57ff287caf6b 100644
>>> --- a/mm/khugepaged.c
>>> +++ b/mm/khugepaged.c
>>> @@ -940,6 +940,11 @@ static bool __collapse_huge_page_swapin(struct mm_struct *mm,
>>> }
>>> vmf.pte--;
>>> pte_unmap(vmf.pte);
>>> +
>>> + /* Drain LRU add pagevec to remove extra pin on the swapped in pages */
>>> + if (swapped_in)
>>> + lru_add_drain();
>> There is already lru_add_drain() called in the swap readahead path, please see
>> swap_vma_readahead() and swap_cluster_readahead().
> But not for the synchronous case. See the SWP_SYNCHRONOUS_IO branch in
> do_swap_page().

Aha, yes. I missed the synchronous case.


> Maybe we should drain it in swap_readpage() or in do_swap_page() after
> swap_readpage()? I donno.

It may be better to keep it as is. Draining the LRU cache for every page in the synchronous do_swap_page() path does not sound very productive; doing it once in khugepaged seems acceptable. For the non-synchronous case we just drain the LRU cache again, but it is likely already empty, so the drain should take very little time since there is nothing to do.