Re: [PATCH v3 1/4] mm: swap: Remove CLUSTER_FLAG_HUGE from swap_cluster_info:flags

From: Barry Song
Date: Sun Mar 03 2024 - 23:52:40 EST


On Sat, Mar 2, 2024 at 6:08 AM Ryan Roberts <ryan.roberts@xxxxxxx> wrote:
>
> On 01/03/2024 16:44, Ryan Roberts wrote:
> > On 01/03/2024 16:31, Matthew Wilcox wrote:
> >> On Fri, Mar 01, 2024 at 04:27:32PM +0000, Ryan Roberts wrote:
> >>> I've implemented the batching as David suggested, and I'm pretty confident it's
> >>> correct. The only problem is that during testing I can't provoke the code to
> >>> take the path. I've been poring over the code but struggling to figure out
> >>> under what situation you would expect the swap entry passed to
> >>> free_swap_and_cache() to still have a cached folio? Does anyone have any idea?
> >>>
> >>> This is the original (unbatched) function, after my change, which caused David's
> >>> concern that we would end up calling __try_to_reclaim_swap() far too much:
> >>>
> >>> int free_swap_and_cache(swp_entry_t entry)
> >>> {
> >>>         struct swap_info_struct *p;
> >>>         unsigned char count;
> >>>
> >>>         if (non_swap_entry(entry))
> >>>                 return 1;
> >>>
> >>>         p = _swap_info_get(entry);
> >>>         if (p) {
> >>>                 count = __swap_entry_free(p, entry);
> >>>                 if (count == SWAP_HAS_CACHE)
> >>>                         __try_to_reclaim_swap(p, swp_offset(entry),
> >>>                                               TTRS_UNMAPPED | TTRS_FULL);
> >>>         }
> >>>         return p != NULL;
> >>> }
> >>>
> >>> The trouble is, whenever it's called, count is always 0, so
> >>> __try_to_reclaim_swap() never gets called.
> >>>
> >>> My test case is allocating 1G anon memory, then doing madvise(MADV_PAGEOUT) over
> >>> it. Then doing either a munmap() or madvise(MADV_FREE), both of which cause this
> >>> function to be called for every PTE, but count is always 0 after
> >>> __swap_entry_free() so __try_to_reclaim_swap() is never called. I've tried for
> >>> order-0 as well as PTE- and PMD-mapped 2M THP.
> >>
> >> I think you have to page it back in again, then it will have an entry in
> >> the swap cache. Maybe. I know little about anon memory ;-)
> >
> > Ahh, I was under the impression that the original folio is put into the swap
> > cache at swap out, then (I guess) it's removed once the IO is complete? I'm sure
> > I'm miles out... what exactly is the lifecycle of a folio going through swap out?
> >
> > I guess I can try forking after swap out, then fault it back in in the child and
> > exit. Then do the munmap in the parent. I guess that could force it? Thanks for
> > the tip - I'll have a play.
>
> That has sort of solved it; the only problem now is that all the folios in the
> swap cache are small (because I don't have Barry's large swap-in series). So
> really I need to figure out how to avoid removing the folio from the cache in
> the first place...

I am quite sure we have a chance to hit a large folio in the swapcache even
when using zRAM - a synchronous swap device - and even during swap-out.

I have a test case as below:
1. two threads running MADV_PAGEOUT
2. two threads reading the data that is being swapped out

In do_swap_page(), from time to time, I can see a large folio in the swapcache.

There is a short window in vmscan, after add_to_swap() and before
__remove_mapping(), during which a large folio is still in the swapcache.

So Ryan, I guess you can trigger this by adding one more thread doing
MADV_DONTNEED to exercise zap_pte_range()? See the sketch below.
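
Something like the below is roughly what I have in mind - just a sketch,
not my exact test; the mapping size, thread counts, loop bounds and the
MADV_HUGEPAGE hint are all arbitrary choices to make the race likely:

#define _GNU_SOURCE
#include <pthread.h>
#include <string.h>
#include <sys/mman.h>

#define SIZE    (256UL << 20)           /* anon mapping to churn through */

static char *buf;

/* keep pushing the range out to swap */
static void *pageout_fn(void *arg)
{
        for (int i = 0; i < 100; i++)
                madvise(buf, SIZE, MADV_PAGEOUT);
        return NULL;
}

/* keep faulting swapped-out pages back in via do_swap_page() */
static void *read_fn(void *arg)
{
        volatile char sum = 0;

        for (int i = 0; i < 100; i++)
                for (unsigned long off = 0; off < SIZE; off += 4096)
                        sum += buf[off];
        return NULL;
}

/* the extra thread: zap PTEs while folios may still sit in the swapcache */
static void *dontneed_fn(void *arg)
{
        for (int i = 0; i < 100; i++)
                madvise(buf, SIZE, MADV_DONTNEED);
        return NULL;
}

int main(void)
{
        pthread_t t[5];

        buf = mmap(NULL, SIZE, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        madvise(buf, SIZE, MADV_HUGEPAGE);      /* encourage large folios */
        memset(buf, 1, SIZE);

        pthread_create(&t[0], NULL, pageout_fn, NULL);
        pthread_create(&t[1], NULL, pageout_fn, NULL);
        pthread_create(&t[2], NULL, read_fn, NULL);
        pthread_create(&t[3], NULL, read_fn, NULL);
        pthread_create(&t[4], NULL, dontneed_fn, NULL);

        for (int i = 0; i < 5; i++)
                pthread_join(t[i], NULL);
        return 0;
}

(Build with -pthread; it just hammers the same range from all sides so
that zap_pte_range() races with folios still sitting in the swapcache.)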


>
> >
> >>
> >> If that doesn't work, perhaps use tmpfs, and use some memory pressure to
> >> force that to swap?
> >>
> >>> I'm guessing the swapcache was already reclaimed as part of MADV_PAGEOUT? I'm
> >>> using a block ram device as my backing store - I think this does synchronous IO
> >>> so perhaps if I have a real block device with async IO I might have more luck?
> >>> Just a guess...
> >>>
> >>> Or perhaps this code path is a corner case? In which case, perhaps it's not worth
> >>> adding the batching optimization after all?
> >>>
> >>> Thanks,
> >>> Ryan
> >>>
> >

Thanks
Barry