Re: [PATCH v6] zswap: memcontrol: implement zswap writeback disabling

From: Chris Li
Date: Tue Dec 12 2023 - 18:40:10 EST


Hi Kairui,

Thanks for sharing the information on how you use swap.

On Mon, Dec 11, 2023 at 1:31 AM Kairui Song <ryncsn@xxxxxxxxx> wrote:
> > 2) As indicated by this discussion, Tencent has a usage case for SSD
> > and hard disk swap as overflow.
> > https://lore.kernel.org/linux-mm/20231119194740.94101-9-ryncsn@xxxxxxxxx/
> > +Kairui
>
> Yes, we are not using zswap. We are using ZRAM for swap since we have
> many different varieties of workload instances with very flexible
> storage setups. Some of them don't have the ability to set up a
> swapfile. So we built a set of kernel infrastructure based on ZRAM,
> which has worked pretty well so far.

This is great. The usage case is actually much broader than I expected.
For example, I had never thought of zram as a swap tier, but now that
you mention it, I am considering whether it makes sense to add zram to
memory.swap.tiers as well as zswap.
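
To make that concrete, here is a rough sketch of how such a per-cgroup
knob might look. The file name memory.swap.tiers and the accepted
keywords are only assumptions based on this discussion, not an existing
ABI:

  # hypothetical per-cgroup interface; names and keywords are illustrative
  echo "zram"     > /sys/fs/cgroup/critical/memory.swap.tiers  # compressed RAM only
  echo "zram ssd" > /sys/fs/cgroup/batch/memory.swap.tiers     # allow SSD overflow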

>
> The concern from some teams is that ZRAM (or zswap) can't always free
> up memory, so it may lead to a higher risk of OOM compared to a
> physical swap device, and they do have suitable devices for swap on
> some of their machines. So secondary swap support is very helpful in
> case of memory usage peaks.
>
> Besides this, another requirement is that different containers may
> have different priority, some containers can tolerate high swap
> overhead while some cannot, so swap tiering is useful for us in many
> ways.
>
> And thanks to cloud infrastructure the disk setup could change from
> time to time depending on workload requirements, so our requirement is
> to support ZRAM (always) + SSD (optional) + HDD (also optional) as
> swap backends, while not making things too complex to maintain.

Just curious, do you use ZRAM + SSD + HDD all enabled at the same time?
Do you ever consider moving data from ZRAM to SSD, or from SSD to HDD?
If you do, I see the possibility of more general swap tier support,
with the shrinking code somehow shared between tiers. Granted, there
are many unanswered questions and a lot of infrastructure is lacking.
Gathering the requirements and weighing their priorities is the first
step towards a possible solution.
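
For reference, the closest the mainline kernel gets today is stacking
swap devices by priority: a lower-priority device is only used once the
higher-priority ones are full, and entries are never demoted between
devices. A rough sketch of that baseline (device paths are
placeholders):

  # ZRAM always present, SSD and HDD as optional overflow
  zramctl --size 8G /dev/zram0 && mkswap /dev/zram0
  swapon -p 100 /dev/zram0        # fastest tier, used first
  swapon -p 50  /dev/nvme0n1p3    # optional SSD overflow
  swapon -p 10  /dev/sdb1         # optional HDD overflow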

> Currently we have implemented cgroup-based ZRAM compression
> algorithm control, per-cgroup ZRAM accounting and limits, and an
> experimental kernel worker that migrates cold swap entries from a
> high-priority device to a low-priority device at very small scale
> (we lack the basic mechanics to do this at large scale; however,
> given the low IOPS of the slow device and how rarely cold pages are
> accessed, this hasn't been much of a problem so far, though it is
> kind of ugly). The rest of swapping (e.g. falling back to secondary
> swap when ZRAM is full) relies on the kernel's native ability.

Thanks for confirming the usage needs for per-cgroup ZRAM enablement
and for flushing between swap devices. I was hoping swap.tiers could
support something like that.
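
For the flushing part, zram's existing per-device writeback (with
CONFIG_ZRAM_WRITEBACK) might be a useful reference point, even though
it is not cgroup aware. A rough sketch of that interface, assuming a
spare partition as the backing device:

  # existing zram writeback knobs, per device rather than per cgroup
  echo /dev/nvme0n1p3 > /sys/block/zram0/backing_dev  # set before disksize
  echo all  > /sys/block/zram0/idle       # mark resident pages as idle
  echo idle > /sys/block/zram0/writeback  # flush idle pages to the backing device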

> So far it works, though not in the best form; more patches are needed
> to make it work better (e.g. the swapin/readahead patch I sent
> previously). Some of our design may also need to change in the long
> term, and we also want a well-built interface and kernel mechanics to
> manage multi-tier swap. I'm very willing to talk and collaborate on
> this.
>

Great. Let's continue this discussion in a new thread and start
gathering requirements and priorities from everyone. The output of
that discussion should be a one-page document listing the swap tier
requirements and rating the priorities among them.

Once we have that nailed down, we can then discuss what the
incremental milestones are to get there.

I am very interested in this topic and willing to spend time on it as well.

Chris