Re: [PATCH v5] zswap: replace RB tree with xarray

From: Yosry Ahmed
Date: Mon Mar 11 2024 - 19:31:45 EST


On Mon, Mar 11, 2024 at 03:26:05PM -0700, Chris Li wrote:
> A very deep RB tree requires rebalancing at times, which
> contributes to zswap fault latencies. An xarray does not
> need to perform tree rebalancing, so replacing the RB tree
> with an xarray yields a small performance gain.
>
> One small difference is that an xarray insert might fail
> with ENOMEM, while an RB tree insert does not allocate
> additional memory.
>
> The zswap_entry size shrinks a bit due to removing the
> RB node, which has two pointers and a color field. The
> xarray stores the pointer in its own tree nodes rather
> than in the zswap_entry, at a cost of one pointer per
> entry inside the xarray. Overall, switching to the xarray
> should save some memory, if the swap entries are densely
> packed.
>
> Notice that zswap_rb_search and zswap_rb_insert are always
> followed by zswap_rb_erase. Using xa_erase and xa_store
> directly saves one tree lookup as well.
>
> Remove zswap_invalidate_entry, since there is no longer a
> need to call zswap_rb_erase; use zswap_free_entry instead.
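
To illustrate the saved lookup (a sketch, not the exact code in the
patch): the invalidate path goes from two tree walks under an
explicit lock to a single walk, since xa_erase() returns the entry
it removed:

	/* before: look up, then erase (two walks) */
	spin_lock(&tree->lock);
	entry = zswap_rb_search(&tree->rbroot, offset);
	if (entry)
		zswap_invalidate_entry(tree, entry);
	spin_unlock(&tree->lock);

	/* after: one walk; xa_erase() hands back the old entry */
	entry = xa_erase(tree, offset);
	if (entry)
		zswap_free_entry(entry);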
>
> The "struct zswap_tree" has been replaced by "struct xarray".
> The tree spin lock has transferred to the xarray lock.
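
In other words, the per-swapfile tree collapses to a bare xarray,
which carries its own xa_lock internally; roughly, from my reading
of the patch:

	-struct zswap_tree {
	-	struct rb_root rbroot;
	-	spinlock_t lock;
	-};
	-static struct zswap_tree *zswap_trees[MAX_SWAPFILES];
	+static struct xarray *zswap_trees[MAX_SWAPFILES];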
>
> Ran the kernel build test 10 times for each version; averages:
> (memory.max=2GB, zswap shrinker and writeback enabled,
> one 50GB swapfile, 24 HT cores, 32 jobs)
>
>        mm-9a0181a3710eb   xarray v5
> user   3532.385           3535.658
> sys     536.231            530.083
> real    200.431            200.176

It would be more useful to show figures for the reduction in tail
latency, since that is the advertised main benefit. Anyway, there is
a small win in the sys build time, and a *lot* of code
simplification, so I like this change regardless.

>
> Signed-off-by: Chris Li <chrisl@xxxxxxxxxx>
> ---
> Changes in v5:
> - Remove zswap_xa_insert(), call xa_store and xa_erase directly.
> - Remove zswap_reject_xarray_fail.
> - Link to v4: https://lore.kernel.org/r/20240304-zswap-xarray-v4-1-c4b45670cc30@xxxxxxxxxx
>
> Changes in v4:
> - Remove zswap_xa_search_and_erase, use xa_erase directly.
> - Move charge of objcg after zswap_xa_insert.
> - Avoid erasing the old entry on the insert failure path.
> - Remove unneeded swap_zswap_tree change.
> - Link to v3: https://lore.kernel.org/r/20240302-zswap-xarray-v3-1-5900252f2302@xxxxxxxxxx
>
> Changes in v3:
> - Use xa_cmpxchg instead of zswap_xa_search_and_delete in zswap_writeback_entry.
> - Use xa_store in zswap_xa_insert directly. Reduce the scope of spinlock.
> - Fix xa_store error handling for same page fill case.
> - Link to v2: https://lore.kernel.org/r/20240229-zswap-xarray-v2-1-e50284dfcdb1@xxxxxxxxxx
>
> Changes in v2:
> - Replace struct zswap_tree with struct xarray.
> - Remove zswap_tree spinlock, use xarray lock instead.
> - Fold zswap_rb_erase() into zswap_xa_search_and_delete() and zswap_xa_insert().
> - Delete zswap_invalidate_entry(), use zswap_free_entry() instead.
> - Link to v1: https://lore.kernel.org/r/20240117-zswap-xarray-v1-0-6daa86c08fae@xxxxxxxxxx
> ---
> mm/zswap.c | 166 +++++++++++++++----------------------------------------------
> 1 file changed, 41 insertions(+), 125 deletions(-)

Nice diffstat :)

Generally LGTM. With a couple of comments below, feel free to add:
Acked-by: Yosry Ahmed <yosryahmed@xxxxxxxxxx>

[..]
> @@ -1555,28 +1473,32 @@ bool zswap_store(struct folio *folio)
> insert_entry:
> entry->swpentry = swp;
> entry->objcg = objcg;
> - if (objcg) {
> - obj_cgroup_charge_zswap(objcg, entry->length);
> - /* Account before objcg ref is moved to tree */
> - count_objcg_event(objcg, ZSWPOUT);
> - }
>
> - /* map */
> - spin_lock(&tree->lock);
> /*
> * The folio may have been dirtied again, invalidate the
> * possibly stale entry before inserting the new entry.
> */
> - if (zswap_rb_insert(&tree->rbroot, entry, &dupentry) == -EEXIST) {
> - zswap_invalidate_entry(tree, dupentry);
> - WARN_ON(zswap_rb_insert(&tree->rbroot, entry, &dupentry));
> + old = xa_store(tree, offset, entry, GFP_KERNEL);
> +

extra blank line here

> + if (xa_is_err(old)) {
> + if (xa_err(old) == -ENOMEM)
> + zswap_reject_alloc_fail++;

I think we want to WARN for any other error codes, as they are
unexpected?
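
Something like this, maybe (untested sketch; "store_failed" standing
in for whatever the error label ends up being called):

	old = xa_store(tree, offset, entry, GFP_KERNEL);
	if (xa_is_err(old)) {
		int err = xa_err(old);

		WARN_ONCE(err != -ENOMEM, "unexpected xarray error: %d\n", err);
		zswap_reject_alloc_fail++;
		goto store_failed;
	}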