Re: [PATCH 2/2] mm: zswap: remove unnecessary tree cleanups in zswap_swapoff()

From: Chris Li
Date: Thu Jan 25 2024 - 14:04:10 EST


On Thu, Jan 25, 2024 at 12:02 AM Yosry Ahmed <yosryahmed@xxxxxxxxxx> wrote:
>
> > > // lru list lock held
> > > shrink_memcg_cb()
> > > swpentry = entry->swpentry
> > > // Don't isolate entry from lru list here, just use list_lru_putback()
> > > spin_unlock(lru list lock)
> > >
> > > folio = __read_swap_cache_async(swpentry)
> > > if (!folio)
> > >         return
> > >
> > > if (!folio_was_allocated)
> > >         folio_put(folio)
> > >         return
> > >
> > > // folio is locked, swapcache is secured against swapoff
> > > tree = get tree from swpentry
> > > spin_lock(&tree->lock)
> >
> > That will not work well with the zswap-to-xarray change. We want to
> > remove the tree lock and only use the xarray lock.
> > The lookup should just hold the xarray RCU read lock and return the
> > entry with its ref count increased.
>
> In this path, we also invalidate the zswap entry, which would require
> holding the xarray lock anyway.

It will drop the RCU read lock after finding the entry and re-acquire
the xarray spin lock for the invalidation. In between there is a brief
window where no lock is held.
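
For illustration, the lookup-then-invalidate pattern I have in mind
would look roughly like the sketch below. This is not meant as the
actual patch: the per-tree xarray layout and the
zswap_entry_get()/zswap_entry_put() refcount helpers are assumptions
here, and the real code has more to do (writeback, stats, etc.).

```c
/* Sketch only: assumes a per-tree xarray keyed by swap offset and a
 * refcounted zswap_entry with hypothetical get/put helpers. */
static void lookup_then_invalidate(struct xarray *xa, pgoff_t offset)
{
	struct zswap_entry *entry;

	/* Lookup under RCU; no spinlock held. */
	rcu_read_lock();
	entry = xa_load(xa, offset);
	if (entry)
		zswap_entry_get(entry);	/* pin the entry */
	rcu_read_unlock();

	if (!entry)
		return;

	/*
	 * Brief window with no locks held: the entry is kept alive
	 * only by the reference taken above.
	 */

	/* Re-acquire the xarray lock only for the invalidation. */
	xa_lock(xa);
	if (xa_load(xa, offset) == entry)	/* recheck: we may have raced */
		__xa_erase(xa, offset);
	xa_unlock(xa);

	zswap_entry_put(entry);	/* drop our pin; may free the entry */
}
```

The recheck under xa_lock() is needed because another task may have
erased or replaced the slot during the lock-free window.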

Chris