[PATCH v2 0/6] mm/zswap: dstmem reuse optimizations and cleanups

From: Chengming Zhou
Date: Mon Dec 18 2023 - 03:22:42 EST


Hi everyone,

Changes in v2:
- Add more changelog and test data about changing dstmem to one page.
- Reorder patches to put the dstmem reuse and the __zswap_load()
refactoring together; the refactoring still comes after the dstmem
reuse since we don't want to handle __zswap_load() failures caused
by memory allocation failure in zswap_writeback_entry().
- Append a patch to directly use the percpu mutex and buffer in
load/store and refactor out __zswap_store() to simplify zswap_store().
- Link to v1: https://lore.kernel.org/r/20231213-zswap-dstmem-v1-0-896763369d04@xxxxxxxxxxxxx

This series is split from [1] to include only the zswap dstmem reuse
optimizations and cleanups; the other part, the rbtree breakdown, is
deferred for retesting after the rbtree is converted to an xarray.

The problem this series optimizes is that zswap_load() and
zswap_writeback_entry() have to allocate a temporary buffer to support
!zpool_can_sleep_mapped(). We can avoid that allocation by reusing the
percpu crypto_acomp_ctx->dstmem, which is already used by zswap_store()
and protected by the same percpu crypto_acomp_ctx->mutex.
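
To illustrate the idea, here is a rough sketch of what the decompression
side can look like (the function name and exact details are hypothetical,
not copied from the patches): the compressed data is bounced through the
per-CPU acomp_ctx->dstmem under the per-CPU acomp_ctx->mutex instead of
through a freshly kmalloc()ed buffer.

/* Hypothetical sketch only, not the actual patch. */
static void zswap_decompress_sketch(struct zswap_entry *entry, struct page *page)
{
        struct zpool *zpool = zswap_find_zpool(entry);
        struct crypto_acomp_ctx *acomp_ctx;
        struct scatterlist input, output;
        u8 *src;

        acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
        mutex_lock(acomp_ctx->mutex);

        src = zpool_map_handle(zpool, entry->handle, ZPOOL_MM_RO);
        if (!zpool_can_sleep_mapped(zpool)) {
                /*
                 * Reuse the per-CPU dstmem instead of a temporary
                 * kmalloc() buffer; entry->length <= PAGE_SIZE, so a
                 * one-page dstmem is large enough.
                 */
                memcpy(acomp_ctx->dstmem, src, entry->length);
                src = acomp_ctx->dstmem;
                zpool_unmap_handle(zpool, entry->handle);
        }

        sg_init_one(&input, src, entry->length);
        sg_init_table(&output, 1);
        sg_set_page(&output, page, PAGE_SIZE, 0);
        acomp_request_set_params(acomp_ctx->req, &input, &output,
                                 entry->length, PAGE_SIZE);
        crypto_wait_req(crypto_acomp_decompress(acomp_ctx->req), &acomp_ctx->wait);

        if (zpool_can_sleep_mapped(zpool))
                zpool_unmap_handle(zpool, entry->handle);
        mutex_unlock(acomp_ctx->mutex);
}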

[1] https://lore.kernel.org/all/20231206-zswap-lock-optimize-v1-0-e25b059f9c3a@xxxxxxxxxxxxx/

Signed-off-by: Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>
---
Chengming Zhou (6):
mm/zswap: change dstmem size to one page
mm/zswap: reuse dstmem when decompress
mm/zswap: refactor out __zswap_load()
mm/zswap: cleanup zswap_load()
mm/zswap: cleanup zswap_writeback_entry()
mm/zswap: directly use percpu mutex and buffer in load/store

mm/zswap.c | 331 ++++++++++++++++++++++++++-----------------------------------
1 file changed, 143 insertions(+), 188 deletions(-)
---
base-commit: 1f242c1964cf9b8d663a2fd72159b296205a8126
change-id: 20231213-zswap-dstmem-d828f563303d

Best regards,
--
Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>