Re: [PATCH 2/2] mm: zswap: remove unnecessary tree cleanups in zswap_swapoff()

From: Yosry Ahmed
Date: Thu Jan 25 2024 - 03:02:47 EST


> > // lru list lock held
> > shrink_memcg_cb()
> >     swpentry = entry->swpentry
> >     // don't isolate the entry from the lru list here; just use list_lru_putback()
> >     spin_unlock(lru list lock)
> >
> >     folio = __read_swap_cache_async(swpentry)
> >     if (!folio)
> >         return
> >
> >     if (!folio_was_allocated)
> >         folio_put(folio)
> >         return
> >
> >     // folio is locked, swapcache is secured against swapoff
> >     tree = get tree from swpentry
> >     spin_lock(&tree->lock)
>
> That will not work well with the zswap-to-xarray change. We want to
> remove the tree lock and use only the xarray lock.
> The lookup should just hold the xarray RCU read lock and return the
> entry with its refcount increased.

In this path, we also invalidate the zswap entry, which would require
holding the xarray lock anyway.
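For concreteness, the flow sketched above might look roughly like the
following. This is a non-compilable sketch, not the actual patch: the
callback arguments are elided, and helper names such as zswap_trees()
and the LRU-status return values are assumptions for illustration only.

```c
/*
 * Sketch of the shrinker callback discussed above (illustrative only).
 * Called with the lru list lock held.
 */
static enum lru_status shrink_memcg_cb(/* lru item, lock, ... */)
{
	struct zswap_entry *entry = /* container_of() on the lru item */;
	swp_entry_t swpentry = entry->swpentry;	/* copied under the lru lock */
	struct zswap_tree *tree;
	struct folio *folio;
	bool folio_was_allocated;

	/* Rotate instead of isolating, then drop the lru lock. */
	list_lru_putback(/* lru, item, ... */);
	spin_unlock(/* lru list lock */);

	folio = __read_swap_cache_async(swpentry, GFP_KERNEL, /* ... */,
					&folio_was_allocated);
	if (!folio)
		return LRU_RETRY;

	if (!folio_was_allocated) {
		/* Already in the swapcache; someone else is handling it. */
		folio_put(folio);
		return LRU_RETRY;
	}

	/*
	 * The locked swapcache folio pins the swap entry, so swapoff
	 * cannot tear down the tree underneath us.
	 */
	tree = zswap_trees(swpentry);		/* hypothetical helper */
	spin_lock(&tree->lock);
	/* lookup + invalidate the entry, writeback, etc. */
	spin_unlock(&tree->lock);
	/* ... */
}
```

With the xarray conversion discussed above, the tree->lock acquisition
would presumably become the xarray's own lock (with RCU-protected
lookups), but the folio-pinning step against swapoff stays the same.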