Re: [External] Re: [PATCH] mm:zswap: fix zswap entry reclamation failure in two scenarios

From: 贺中坤
Date: Tue Nov 14 2023 - 00:22:14 EST


Thanks for your time, Nhat.

>
> These two cases should not count as "successful writeback" right?
>

This is true from the perspective of the writeback itself, but should it
also be considered successful in terms of the purpose of writeback,
i.e. allowing the compressed memory and the zswap_entry to be reclaimed?

> I'm slightly biased of course, since my zswap shrinker depends on this
> as one of the potential signals for over-shrinking - but that aside, I think
> that this constitutes a failed writeback (i.e should not increment writeback
> counter, and the limit-based reclaim should try again etc.). If anything,
> it will make it incredibly confusing for users.

This patch skips the writeback step, so the writeback counter will not
be incremented. Currently MAX_RECLAIM_RETRIES is 14, so shrink_worker
will often give up if writeback keeps failing.

>
> For instance, we were trying to estimate the number of zswap store
> fails by subtracting the writeback count from the overall pswpout, and
> this could throw us off by inflating the writeback count, and deflating
> the zswap store failure count as a result.

As mentioned above, the writeback counter will not be incremented.

>
> Regarding the second case specifically, I thought that was the point of
> having zswap_exclusive_loads_enabled disabled - i.e still keeps a copy
> around in the zswap pool even after a completed zswap_load? Based
> on the Kconfig documentation:
>
> "This avoids having two copies of the same page in memory
> (compressed and uncompressed) after faulting in a page from zswap.
> The cost is that if the page was never dirtied and needs to be
> swapped out again, it will be re-compressed."
>

Yes, I know the point: in the read-only case there is no data update,
so the next swapout does not need to compress the page again.
But consider this scenario: a lot of data is cached in memory and zswap,
we hit the limit, and shrink_worker fails. Newly incoming data is then
written directly to swap because zswap_store fails. Shouldn't we free
the last (already-loaded) entry to store the latest one in zswap?