Re: [PATCH 18/24] mm/swap: introduce a helper non fault swapin

From: Chris Li
Date: Tue Dec 12 2023 - 21:23:10 EST


On Tue, Nov 28, 2023 at 3:22 AM Kairui Song <ryncsn@xxxxxxxxx> wrote:
>
> > > /*
> > > * Make sure huge_gfp is always more limited than limit_gfp.
> > > * Some of the flags set permissions, while others set limitations.
> > > @@ -1854,9 +1838,12 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
> > > {
> > > struct address_space *mapping = inode->i_mapping;
> > > struct shmem_inode_info *info = SHMEM_I(inode);
> > > - struct swap_info_struct *si;
> > > + enum swap_cache_result result;
> > > struct folio *folio = NULL;
> > > + struct mempolicy *mpol;
> > > + struct page *page;
> > > swp_entry_t swap;
> > > + pgoff_t ilx;
> > > int error;
> > >
> > > VM_BUG_ON(!*foliop || !xa_is_value(*foliop));
> > > @@ -1866,34 +1853,30 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
> > > if (is_poisoned_swp_entry(swap))
> > > return -EIO;
> > >
> > > - si = get_swap_device(swap);
> > > - if (!si) {
> > > + mpol = shmem_get_pgoff_policy(info, index, 0, &ilx);
> > > + page = swapin_page_non_fault(swap, gfp, mpol, ilx, fault_mm, &result);
>
> Hi Chris,
>
> I've been trying to address these issues in V2. Most issues in other
> patches have a straightforward solution, and some could be discussed
> in a separate series, but I've come up with some thoughts here:
>
> >
> > Notice this "result" CAN be outdated, e.g. after this call, the swap
> > cache can be changed by another thread taking the swap page fault
> > and installing the folio into the swap cache, or removing it.
>
> This is true, and it seems a potential race also exists before this
> series in the direct (no swapcache) swapin path (do_swap_page), if I
> understand it correctly:

I just noticed I missed this email while I was cleaning up my email
archive. Sorry for the late reply. Traveling does not help either.

I am not aware of any swap-in race bugs in the existing code. Races,
yes. If you discover a code path where a race causes a bug, please
report it.
>
> In the do_swap_page path, multiple processes could swap in the page
> at the same time (a page mapped only once can still be shared by sub
> threads), and they could get different folios. The later pte lock and
> pte_same check are not enough, because while one process is not
> holding the pte lock, another process could read the page in,
> swap_free the entry, then swap the page out again reusing the same
> entry: an ABA problem. The race is not likely to happen in reality,
> but it is possible in theory.
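
If I read this right, the claimed interleaving is roughly the
following (a simplified sketch of the swapcache-bypass path in
do_swap_page(), written from memory rather than copied from the tree,
so the details are approximate):

	/* Thread A, in do_swap_page(), swapcache bypass path: */
	entry = pte_to_swp_entry(vmf->orig_pte);
	folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma,
				vmf->address, false);
	/* ... synchronously read the page from swap into folio ... */

	/*
	 * Claimed window: thread B faults on the same entry,
	 * swap_free()s it, and the freed slot gets reallocated and
	 * swapped out again, producing the same swp_entry_t value
	 * (ABA).
	 */

	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd,
				       vmf->address, &vmf->ptl);
	if (!pte_same(ptep_get(vmf->pte), vmf->orig_pte))
		goto out_nomap;
	/* In the ABA case above, pte_same() passes and we proceed. */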

Have you taken into account that, while the page is locked, it cannot
be removed from the swap cache? I think the swap cache find-and-get
function returns the page locked, and the swap cache then cannot
change the mapping as long as the page is still locked.
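
Concretely, the pattern I have in mind is the recheck done with the
folio lock held (again a rough sketch from memory, not the exact
upstream code):

	/* Thread A, do_swap_page(), swapcache path: */
	folio = swap_cache_get_folio(entry, vma, vmf->address);
	/* ... */
	folio_lock(folio);	/* really folio_lock_or_retry() */

	/*
	 * With the folio lock held, the folio cannot be deleted from
	 * the swap cache, so the entry cannot be freed and reused
	 * underneath us while we recheck:
	 */
	if (!folio_test_swapcache(folio) || folio->swap.val != entry.val)
		goto out_page;

	/* pte lock + pte_same check follow, with the folio pinned */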

>
> The same issue exists for shmem here: there are the later
> shmem_confirm_swap/shmem_add_to_page_cache checks to prevent
> re-installing into the shmem mapping for direct swap-in, but they
> are also not enough. Another process could read the page in and swap
> it out again using the same entry, so the mapping entry appears
> unchanged during the time window. Still very unlikely to happen in
> reality, but not impossible.

Please take another look with the page lock in mind, and report back
if you still think there is a race bug in the existing code. We can
then take a closer look at the concurrent call stacks needed to
trigger the bug.
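
For the shmem side, the place to look is the recheck
shmem_swapin_folio() does under the folio lock before
shmem_add_to_page_cache() (abridged from my reading of mm/shmem.c, so
please double check against your tree):

	folio_lock(folio);
	if (!folio_test_swapcache(folio) ||
	    folio->swap.val != swap.val ||
	    !shmem_confirm_swap(mapping, index, swap)) {
		error = -EEXIST;	/* raced, caller will retry */
		goto unlock;
	}
	/* ... */
	error = shmem_add_to_page_cache(folio, mapping, index,
					swp_to_radix_entry(swap), gfp);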

Chris

>
> When the swapcache is used there is no such issue, since the swap
> lock and swap_map are used to synchronize all readers, and while one
> reader still holds the folio, the entry is pinned through the
> swapcache; if a folio is removed from the swapcache,
> folio_test_swapcache will fail, and the reader can retry.
>
> I'm trying to come up with better locking for direct swap-in. Am I
> missing anything here? Correct me if I got it wrong...
>