Re: [PATCH RFC 1/1] mm/swap: queue reclaimable folio to local rotate batch when !folio_test_lru()

From: Yu Zhao
Date: Sun Feb 18 2024 - 03:09:18 EST


On Sat, Feb 17, 2024 at 9:52 PM Chengming Zhou <chengming.zhou@xxxxxxxxx> wrote:
>
> On 2024/2/15 15:06, Yu Zhao wrote:
> > On Wed, Feb 14, 2024 at 4:18 AM Chengming Zhou <chengming.zhou@xxxxxxxxx> wrote:
> >>
> >> On 2024/2/14 15:13, Yu Zhao wrote:
> >>> On Fri, Feb 9, 2024 at 6:00 AM <chengming.zhou@xxxxxxxxx> wrote:
> >>>>
> >>>> From: Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>
> >>>>
> >>>> All LRU move interfaces share a problem: they have no effect if the
> >>>> folio is isolated from the LRU (sitting in a per-CPU batch or
> >>>> isolated by a shrinker). Since they can't move a folio or change its
> >>>> LRU status while it's isolated, they mostly just clear the folio flag
> >>>> and do nothing in this case.
> >>>>
> >>>> In our case, a written-back, reclaimable folio won't be rotated to
> >>>> the tail of the inactive list because it's still in the per-CPU
> >>>> lru_add batch. This delays the reclaim of that folio and causes other
> >>>> folios to be evicted instead.
> >>>>
> >>>> This patch queues the reclaimable folio to the per-CPU rotate batch
> >>>> even when !folio_test_lru(), in the hope that it will be handled
> >>>> after the lru_add batch, which puts the folio on the LRU list first,
> >>>> so the folio can be rotated to the tail successfully when the rotate
> >>>> batch is handled.
> >>>>
> >>>> Signed-off-by: Chengming Zhou <zhouchengming@xxxxxxxxxxxxx>
> >>>
> >>> I don't think the analysis is correct. IIRC, writeback from
> >>> non-reclaim paths doesn't require isolation, and the reclaim path
> >>> doesn't use the lru_add struct folio_batch.
> >>
> >> Ah, my bad, I forgot to mention the important context in the message:
> >>
> >> This is not from the normal reclaim context; it's from the zswap
> >> writeback reclaim context, which first sets the PG_reclaim flag and
> >> then submits the async writeback io.
> >>
> >> If the writeback io completes fast enough, folio_rotate_reclaimable()
> >> will be called before that folio is put on the LRU list (it's still
> >> in the local lru_add batch, so it's somewhat like being isolated too).
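> >>
> >> The relevant completion path, simplified from folio_end_writeback()
> >> in mm/filemap.c:
> >>
> >> 	if (folio_test_reclaim(folio)) {
> >> 		folio_clear_reclaim(folio);
> >> 		/*
> >> 		 * A no-op while the folio still sits in the per-CPU
> >> 		 * lru_add batch, since folio_test_lru() is false
> >> 		 * there; PG_reclaim has already been cleared above,
> >> 		 * so the rotation is lost for good.
> >> 		 */
> >> 		folio_rotate_reclaimable(folio);
> >> 	}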
> >>
> >>>
> >>> Did you see any performance improvements with this patch? In general,
> >>> this kind of patch should come with performance numbers to show it
> >>> really helps (not just in theory).
> >>
> >> Right, there are some improvements; the numbers are in the cover
> >> letter. But this solution is not good enough, so it's just an RFC for
> >> discussion. :)
> >>
> >>                           mm-unstable-hot  zswap-lru-reclaim
> >> real                                63.34              62.72
> >> user                              1063.20            1060.30
> >> sys                                272.04             256.14
> >> workingset_refault_anon        2103297.00         1788155.80
> >> workingset_refault_file          28638.20           39249.40
> >> workingset_activate_anon        746134.00          695435.40
> >> workingset_activate_file          4344.60            4255.80
> >> workingset_restore_anon         653163.80          605315.60
> >> workingset_restore_file           1079.00             883.00
> >> workingset_nodereclaim               0.00               0.00
> >> pgscan                        12971305.60        12730331.20
> >> pgscan_kswapd                        0.00               0.00
> >> pgscan_direct                 12971305.60        12730331.20
> >> pgscan_khugepaged                    0.00               0.00
> >>
> >>>
> >>> My guess is that you are hitting this problem [1].
> >>>
> >>> [1] https://lore.kernel.org/linux-mm/20221116013808.3995280-1-yuzhao@xxxxxxxxxx/
> >>
> >> Right, I just saw it; it's the same problem. The only difference is
> >> that in your case the folio is isolated by the shrinker, while in my
> >> case the folio is in the per-CPU lru_add batch. Either way, the result
> >> is the same: the folio can't be rotated successfully when writeback
> >> completes.
> >
> > In that case, a better solution would be to make lru_add add
> > (_reclaim() && !_dirty() && !_writeback()) folios at the tail.
> > (_rotate() needs to leave _reclaim() set if it fails to rotate.)
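> >
> > An untested sketch against lru_add_fn() in mm/swap.c:
> >
> > 	/*
> > 	 * Writeback finished while the folio was still in the
> > 	 * lru_add batch: add it at the tail so it gets reclaimed
> > 	 * next, as the failed rotation intended. This relies on
> > 	 * folio_rotate_reclaimable() leaving PG_reclaim set when
> > 	 * it fails to rotate.
> > 	 */
> > 	if (folio_test_reclaim(folio) && !folio_test_dirty(folio) &&
> > 	    !folio_test_writeback(folio)) {
> > 		folio_clear_reclaim(folio);
> > 		lruvec_add_folio_tail(lruvec, folio);
> > 	} else {
> > 		lruvec_add_folio(lruvec, folio);
> > 	}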
>
> Right, that's a solution. But PG_readahead is an alias of PG_reclaim,
> so I'm afraid this would rotate readahead folios to the inactive tail.

Then drain before setting readahead, since readahead isn't set on every folio.
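
Something along these lines at the callsite that marks the folio for
async readahead (an untested sketch; the surrounding readahead loop is
elided):

	/*
	 * PG_readahead aliases PG_reclaim, so make sure the folio is
	 * actually on the LRU, i.e. not sitting in a per-CPU lru_add
	 * batch, before setting it. Draining here is cheap because
	 * only one folio per readahead window is marked.
	 */
	if (!folio_test_lru(folio))
		lru_add_drain();
	folio_set_readahead(folio);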