Re: [PATCH v4] mm: swap: async free swap slot cache entries

From: Tim Chen
Date: Thu Feb 15 2024 - 20:38:45 EST


On Thu, 2024-02-15 at 16:11 -0800, Andrew Morton wrote:
> On Wed, 14 Feb 2024 17:02:13 -0800 Chris Li <chrisl@xxxxxxxxxx> wrote:
>
> > We discovered that 1% of swap page faults take 100us+, while 50% of
> > swap faults are under 20us.
> >
> > Further investigation shows that a large portion of the time is
> > spent in the free_swap_slots() function in the long-tail case.
> >
> > The percpu cache of swap slots is freed in a batch of 64 entries
> > inside free_swap_slots(). These cache entries are accumulated
> > from previous page faults, which may not be related to the current
> > process.
> >
> > Doing the batch free in the page fault handler causes longer
> > tail latencies and penalizes the current process.
> >
> > When the swap slot cache is full, schedule an async free of the
> > cached swap slots in a work queue, before the next swap fault
> > comes in. If the next swap fault arrives before the async free
> > gets a chance to run, it directly frees all the cached swap
> > slots in the fault path, the same way as before.
> >
> > Testing:
> >
> > Chun-Tse did some benchmarking on a Chromebook, showing that
> > zram_wait_metrics improves about 15% with 80% and 95% confidence.
> >
> > I recently ran some experiments on about 1000 Google production
> > machines. It shows swapin latency drops in the long tail
> > 100us - 500us bucket dramatically.
> >
> > platform (100-500us) (0-100us)
> > A 1.12% -> 0.36% 98.47% -> 99.22%
> > B 0.65% -> 0.15% 98.96% -> 99.46%
> > C 0.61% -> 0.23% 98.96% -> 99.38%
> >
>
> What this description lacks is any description of why anyone cares.
>
> The patch clearly decreases overall throughput (speed-vs-latency is a
> common tradeoff).
>
> And the "we don't know how to fix this properly so punt it into a
> kernel thread" approach remains lame. For example, the risk that the
> now-liberated allocator can outpace the async freeing, resulting in
> unlimited object windup.


Andrew,

What you are saying about outpacing the async free is true for the v1 and v2 versions of the patch.

But in this latest version, if another reclaim comes in before the async free has kicked in,
we free the whole cache directly, the same as the original code, without waiting
for the async free. That is different from the first version,
where you fall into a free-one-at-a-time mode while waiting for the async free.
That was also my objection to the first two versions, as you could stay in that
slow free-one-at-a-time mode for a long time.

So now we should not have unlimited object windup, and we still free in batches
of 64, either in the direct path or in the async path.
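
Roughly, the free path looks like the sketch below (simplified, not the actual
patch; free_swap_slot(), swapcache_free_entries(), SWAP_SLOTS_CACHE_SIZE and
the percpu swp_slots cache follow mm/swap_slots.c, while the async_free
work_struct member of swap_slots_cache is an illustrative assumption):

        void free_swap_slot(swp_entry_t entry)
        {
                struct swap_slots_cache *cache = raw_cpu_ptr(&swp_slots);

                spin_lock_irq(&cache->free_lock);
                if (cache->n_ret >= SWAP_SLOTS_CACHE_SIZE) {
                        /*
                         * The async worker has not run yet, so drain the
                         * whole batch right here, exactly as the original
                         * code did.
                         */
                        swapcache_free_entries(cache->slots_ret, cache->n_ret);
                        cache->n_ret = 0;
                }
                cache->slots_ret[cache->n_ret++] = entry;
                if (cache->n_ret == SWAP_SLOTS_CACHE_SIZE)
                        /* batch is full: drain it before the next fault */
                        schedule_work(&cache->async_free);
                spin_unlock_irq(&cache->free_lock);
        }

Either way the batch never grows past 64 entries, which is why there is no
unbounded windup.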

>
>
> And here's a fun one: what happens if the producer of these objects has
> SCHED_FIFO policy and it's a uniprocessor machine? If the producer sits
> there allocating objects and the freeing thread never executes? Has
> this been considered, and tested for?

If the free thread does not execute, then in this version of the patch the
allocate path frees the full cache directly when it sees a full cache. This works
just as it did before the patch was applied, so I do not foresee the current
change reducing throughput.

The current patch does give us a chance to do the free in the background, which
cuts down on how often the allocate path has to free the cache directly.
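
For reference, the async side could look something like this (again only a
sketch; the worker name and the async_free member are assumptions, not the
posted patch):

        static void flush_async_free_cache(struct work_struct *work)
        {
                struct swap_slots_cache *cache =
                        container_of(work, struct swap_slots_cache, async_free);

                spin_lock_irq(&cache->free_lock);
                if (cache->n_ret) {
                        /* drain the same batch, under the same lock */
                        swapcache_free_entries(cache->slots_ret, cache->n_ret);
                        cache->n_ret = 0;
                }
                spin_unlock_irq(&cache->free_lock);
        }

Whichever of the worker or the next fault takes the lock first does the batch
free; the other simply finds a cache that is empty or refilling and moves on.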

That should help the tail latency and reduce the number of times you have to
wait for the free to complete. Most of the time, we would not have to do the
direct free ourselves.

Tim

>
>
> All these concerns, risks and complexity and the changelog offers us no
> reason to take any of this on. What's wrong with the existing code?
> Please exhaustively describe the issues which are being seen. And
> explain why those issues are sufficiently serious to leave the above
> issues and risks unaddressed.
>