Re: [PATCH 1/2] mm: zswap: optimize zswap pool size tracking

From: Johannes Weiner
Date: Mon Mar 11 2024 - 22:34:32 EST


On Mon, Mar 11, 2024 at 10:09:35PM +0000, Yosry Ahmed wrote:
> On Mon, Mar 11, 2024 at 12:12:13PM -0400, Johannes Weiner wrote:
> > Profiling the munmap() of a zswapped memory region shows 50%(!) of the
> > total cycles currently going into updating the zswap_pool_total_size.
>
> Yikes. I have always hated that size update scheme FWIW.
>
> I have also wondered whether it makes sense to just maintain the number
> of pages in zswap as an atomic, like zswap_stored_pages. I guess your
> proposed scheme is even cheaper for the load/invalidate paths because we
> do nothing at all. It could be an option if the aggregation in other
> paths ever becomes a problem, but we would need to make sure it
> doesn't regress the load/invalidate paths. Just sharing some thoughts.

Agree with you there. I actually tried doing it that way at first, but
noticed zram uses zs_get_total_pages() and actually wants a per-pool
count. I didn't want the backend to have to update two atomics, so I
settled for this version.
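
For illustration only, here is a minimal user-space sketch of the
aggregate-on-demand idea (the pool struct, counter names, and NR_POOLS
are stand-ins, not the zswap/zpool code): the backend keeps the per-pool
count it maintains anyway, the hot paths touch nothing global, and only
the few consumers that actually need the total pay for the summation.

/* Sketch: per-pool counters, summed only when a consumer asks. */
#include <stdatomic.h>
#include <stdio.h>

#define NR_POOLS 4

struct pool {
	atomic_ulong nr_pages;	/* maintained by the backend anyway */
};

static struct pool pools[NR_POOLS];

/* Aggregate on demand; store/load/invalidate do no extra work here. */
static unsigned long total_pages(void)
{
	unsigned long total = 0;

	for (int i = 0; i < NR_POOLS; i++)
		total += atomic_load(&pools[i].nr_pages);
	return total;
}

int main(void)
{
	/* Simulate stores landing in different pools. */
	atomic_fetch_add(&pools[0].nr_pages, 100);
	atomic_fetch_add(&pools[2].nr_pages, 42);

	/* Only the limit check, meminfo, and the shrinker pay for this. */
	printf("total: %lu pages\n", total_pages());
	return 0;
}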

> > There are three consumers of this counter:
> > - store, to enforce the globally configured pool limit
> > - meminfo & debugfs, to report the size to the user
> > - shrink, to determine the batch size for each cycle
> >
> > Instead of aggregating every time an entry enters or exits the zswap
> > pool, aggregate the value from the zpools on demand:
> >
> > - Stores aggregate the counter anyway upon success. Aggregating to
> > check the limit instead is the same amount of work.
> >
> > - Meminfo & debugfs might benefit somewhat from a pre-aggregated
> > counter, but aren't exactly hotpaths.
> >
> > - Shrinking can aggregate once for every cycle instead of doing it for
> > every freed entry. As the shrinker might work on tens or hundreds of
> > objects per scan cycle, this is a large reduction in aggregations.
> >
> > The paths that benefit dramatically are swapin, swapoff, and
> > unmaps. There could be millions of pages being processed until
> > somebody asks for the pool size again. This eliminates the pool size
> > updates from those paths entirely.
>
> This looks like a big win, thanks! I wonder if you have any numbers or
> perf profiles to share. That would be nice to have, but I think the
> benefit is clear regardless.

I deleted the perf files already, but can re-run it tomorrow.

> I also like the implicit cleanup when we switch to maintaining the
> number of pages rather than bytes. The code looks much better with all
> the shifts and divisions gone :)
>
> I have a couple of comments below. With them addressed, feel free to
> add:
> Acked-by: Yosry Ahmed <yosryahmed@xxxxxxxxxx>

Thanks!

> > @@ -1385,6 +1365,10 @@ static void shrink_worker(struct work_struct *w)
> >  {
> >  	struct mem_cgroup *memcg;
> >  	int ret, failures = 0;
> > +	unsigned long thr;
> > +
> > +	/* Reclaim down to the accept threshold */
> > +	thr = zswap_max_pages() * zswap_accept_thr_percent / 100;
>
> This calculation is repeated twice, so I'd rather keep a helper for it
> as an alternative to zswap_can_accept(). Perhaps zswap_threshold_page()
> or zswap_acceptance_pages()?

Sounds good. I went with zswap_accept_thr_pages().

> > @@ -1711,6 +1700,13 @@ void zswap_swapoff(int type)
> >
> >  static struct dentry *zswap_debugfs_root;
> >
> > +static int debugfs_get_total_size(void *data, u64 *val)
> > +{
> > +	*val = zswap_total_pages() * PAGE_SIZE;
> > +	return 0;
> > +}
> > +DEFINE_DEBUGFS_ATTRIBUTE(total_size_fops, debugfs_get_total_size, NULL, "%llu");
>
> I think we are missing a newline here to maintain the current format
> (i.e. "%llu\n").

Oops, good catch! I had verified the debugfs file (along with the
others) with 'grep . *', which hides that this is missing. Fixed up.

Thanks for taking a look. The incremental diff is below. I'll run the
tests and recapture the numbers tomorrow, then send v2.

diff --git a/mm/zswap.c b/mm/zswap.c
index 7c39327a7cc2..1a5cc7298306 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -504,6 +504,11 @@ static unsigned long zswap_max_pages(void)
 	return totalram_pages() * zswap_max_pool_percent / 100;
 }
 
+static unsigned long zswap_accept_thr_pages(void)
+{
+	return zswap_max_pages() * zswap_accept_thr_percent / 100;
+}
+
 unsigned long zswap_total_pages(void)
 {
 	struct zswap_pool *pool;
@@ -1368,7 +1373,7 @@ static void shrink_worker(struct work_struct *w)
 	unsigned long thr;
 
 	/* Reclaim down to the accept threshold */
-	thr = zswap_max_pages() * zswap_accept_thr_percent / 100;
+	thr = zswap_accept_thr_pages();
 
 	/* global reclaim will select cgroup in a round-robin fashion. */
 	do {
@@ -1493,9 +1498,7 @@ bool zswap_store(struct folio *folio)
 	}
 
 	if (zswap_pool_reached_full) {
-		unsigned long thr = max_pages * zswap_accept_thr_percent / 100;
-
-		if (cur_pages > thr)
+		if (cur_pages > zswap_accept_thr_pages())
 			goto shrink;
 		else
 			zswap_pool_reached_full = false;
@@ -1705,7 +1708,7 @@ static int debugfs_get_total_size(void *data, u64 *val)
 	*val = zswap_total_pages() * PAGE_SIZE;
 	return 0;
 }
-DEFINE_DEBUGFS_ATTRIBUTE(total_size_fops, debugfs_get_total_size, NULL, "%llu");
+DEFINE_DEBUGFS_ATTRIBUTE(total_size_fops, debugfs_get_total_size, NULL, "%llu\n");
 
 static int zswap_debugfs_init(void)
 {