Re: [PATCH] mm: swap: async free swap slot cache entries

From: Chris Li
Date: Fri Dec 22 2023 - 18:16:47 EST


On Fri, Dec 22, 2023 at 11:52:08AM -0800, Andrew Morton wrote:
> On Thu, 21 Dec 2023 22:25:39 -0800 Chris Li <chrisl@xxxxxxxxxx> wrote:
>
> > We discovered that 1% swap page fault is 100us+ while 50% of
> > the swap fault is under 20us.
> >
> > Further investigation show that a large portion of the time
> > spent in the free_swap_slots() function for the long tail case.
> >
> > The percpu cache of swap slots is freed in a batch of 64 entries
> > inside free_swap_slots(). These cache entries are accumulated
> > from previous page faults, which may not be related to the current
> > process.
> >
> > Doing the batch free in the page fault handler causes longer
> > tail latencies and penalizes the current process.
> >
> > Move free_swap_slots() outside of the swapin page fault handler into an
> > async work queue to avoid such long tail latencies.
>
> This will require a larger amount of total work than the current

Yes, there will be a small amount of extra overhead to schedule the job
onto the work queue.

> scheme. So we're trading that off against better latency.
>
> Why is this a good tradeoff?

That is a very good question. Both Hugh and Wei have asked me similar
questions before. +Hugh.

The TL;DR is that it makes swap more parallelizable.

Modern computers typically have more than one CPU, and CPU utilization
rarely reaches 100%. We are not really trading latency for making someone
else run slower. Most of the time the real effect is that the current swapin
page fault can return sooner, so more work can be submitted to the kernel
earlier, while an otherwise idle CPU picks up the non-latency-critical work
of freeing the swap slot cache entries. The net effect is that we speed
things up and increase overall system utilization rather than slow things
down.

The test results from Chromebooks and from Google production servers should
show that this is beneficial to both laptop and server workloads, making
them more responsive under swap-heavy load.

Chris