Re: [PATCH v2] mm: swap: async free swap slot cache entries

From: Tim Chen
Date: Fri Feb 09 2024 - 12:52:57 EST


On Tue, 2024-02-06 at 17:51 -0800, Chris Li wrote:
> On Tue, Feb 6, 2024 at 5:08 PM Tim Chen <tim.c.chen@xxxxxxxxxxxxxxx> wrote:
> >
> > On Mon, 2024-02-05 at 11:10 -0800, Chris Li wrote:
> > >
> > >
> > > In our system, a really heavy swap load is rare and it means something
> > > is already wrong. At that point the app's SLO is likely at risk,
> > > regardless of long tail swap latency. It is already too late to
> > > address it at the swap fault end. We need to address the source of the
> > > problem which is swapping out too much.
> > >
> > >
> >
> > Could some usage scenarios put more pressure on swap than yours?
> > Say a system with limited RAM that relies on zswap?
> >
> Of course. In that case, what I proposed already does what I think is
> best for this situation. After grabbing the cache lock and finding
> that the async free hasn't started yet, just free all 64 entries in
> the swap slot cache. That is similar to the old code's behavior.
> Yes, it will have the long tail latency due to batch freeing 64 entries.
> My point is not that I don't care about heavy swap behavior.
> My point is that the app will suffer from the swap storm anyway; it is
> unavoidable. That will be the dominant factor, overshadowing the
> effect of the batch free optimization.

The original optimization that introduced swap_slots targets exactly such
heavy swap use cases, where a fast swap backend allows a higher sustainable
swap throughput. We should not ignore it, and I am afraid your current
patch, as is, will hurt that performance. If you change the direct free
path to free all entries, that could maintain the throughput, and I'll
be okay with that.
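
To be concrete, something along these lines in free_swap_slot()'s
cache-full path is what I have in mind (just a rough sketch from my
reading of the discussion, not your actual patch; "cache->async_free"
is a placeholder name for whatever work item your patch adds):

	spin_lock_irq(&cache->free_lock);
	if (cache->n_ret >= SWAP_SLOTS_CACHE_SIZE &&
	    !work_pending(&cache->async_free)) {
		/*
		 * The async worker has not started draining this
		 * batch yet, so free all 64 cached entries right
		 * here, synchronously, like the old code did.  That
		 * keeps the sustained throughput under heavy swap.
		 */
		swapcache_free_entries(cache->slots_ret, cache->n_ret);
		cache->n_ret = 0;
	}
	/* (handling of a full cache while the worker runs is omitted) */
	cache->slots_ret[cache->n_ret++] = entry;
	spin_unlock_irq(&cache->free_lock);

The exact check doesn't matter much to me; the point is that the
synchronous fallback frees the whole batch instead of one entry at a
time.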

>
> Or did I miss your point: do you want to propose double buffering the
> swap slot cache so it can perform better under swap storm situations?
>

I am not actually proposing to double the buffer, as that approach has
its own downsides.

Tim