Re: [PATCH v10] mm: vmscan: try to reclaim swapcache pages if no swap space

From: Yosry Ahmed
Date: Mon Nov 27 2023 - 16:57:08 EST


On Mon, Nov 27, 2023 at 1:32 PM Minchan Kim <minchan@xxxxxxxxxx> wrote:
>
> On Mon, Nov 27, 2023 at 12:22:59AM -0800, Chris Li wrote:
> > On Mon, Nov 27, 2023 at 12:14 AM Huang, Ying <ying.huang@xxxxxxxxx> wrote:
> > > > I agree with Ying that anonymous pages typically have different page
> > > > access patterns than file pages, so we might want to treat them
> > > > differently to reclaim them effectively.
> > > > One random idea:
> > > > How about we put the anonymous page in a swap cache in a different LRU
> > > > than the rest of the anonymous pages. Then shrinking against those
> > > > pages in the swap cache would be more effective. Instead of having
> > > > [anon, file] LRU, now we have [anon not in swap cache, anon in swap
> > > > cache, file] LRU
> > >
> > > I don't think that it is necessary. The patch is only for a special use
> > > case, where the swap device is used up while some pages are still in
> > > the swap cache. The patch will hurt performance, but it is meant to
> > > avoid OOM only, not to improve performance. Per my understanding, we
> > > will not use up swap device space in most cases. That may not hold for
> > > ZRAM, but will we keep pages in swap cache for long when we use ZRAM?
> >
> > I asked earlier in this email thread how many pages can be freed by
> > this patch as well, but haven't gotten an answer from the author yet.
> > That is one important aspect in evaluating how valuable the patch is.
>
> Exactly. Since the swap cache has a different lifetime than the page
> cache, those pages would usually be dropped when they are unmapped
> (unless they are shared with others, but anon pages are usually
> exclusively private), so I wonder how much memory we can save.

I think the point of this patch is not saving memory, but rather
avoiding an OOM condition that will happen if we have no swap space
left, but some pages left in the swap cache. Of course, the OOM
avoidance will come at the cost of extra work in reclaim to swap those
pages out.

The only case where I think this might be harmful is if there are plenty
of pages to reclaim on the file LRU, and instead we opt to chase down
the few swap cache pages. So perhaps we can add a check to only set
sc->swapcache_only if the number of pages in the swap cache is larger
than the number of pages on the file LRU, or something similar? That
way we make sure we don't chase the swapcache pages down if there's
plenty to scan on the file LRU.

>
> > Regarding running out of swap space. That is a good point, in server
> > workload we don't typically run out of swap device space anyway.
> >
> > Android uses ZRAM, the story might be different. Adding Minchan here.
>
> Swap is usually almost full in Android since it compacts (i.e., swaps
> out) background apps aggressively.