Re: [PATCH v6] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure

From: Byungchul Park
Date: Mon Mar 04 2024 - 23:09:51 EST


On Tue, Mar 05, 2024 at 11:43:45AM +0900, Byungchul Park wrote:
> On Tue, Mar 05, 2024 at 11:37:08AM +0900, Byungchul Park wrote:
> > On Tue, Mar 05, 2024 at 09:54:19AM +0800, Huang, Ying wrote:
> > > Byungchul Park <byungchul@xxxxxx> writes:
> > >
> > > > Changes from v5:
> > > > 1. Make it retry the kswapd's scan priority loop with
> > > > cache_trim_mode off *only if* the mode didn't work in the
> > > > previous loop. (feedback from Huang Ying)
> > > > 2. Take into account 'break's from the priority loop when making
> > > > the decision whether to retry. (feedback from Huang Ying)
> > > > 3. Update the test result in the commit message.
> > > >
> > > > Changes from v4:
> > > > 1. Make other scans start with may_cache_trim_mode = 1.
> > > >
> > > > Changes from v3:
> > > > 1. Update the test result in the commit message with v4.
> > > > 2. Retry the whole priority loop with cache_trim_mode off again,
> > > > rather than forcing the mode off at the highest priority,
> > > > when the mode doesn't work. (feedback from Johannes Weiner)
> > > >
> > > > Changes from v2:
> > > > 1. Change the condition to stop cache_trim_mode.
> > > >
> > > > From - Stop it if it's at high scan priorities, 0 or 1.
> > > > To - Stop it if it's at high scan priorities, 0 or 1, and
> > > > the mode didn't work in the previous turn.
> > > >
> > > > (feedback from Huang Ying)
> > > >
> > > > 2. Change the test result in the commit message after testing
> > > > with the new logic.
> > > >
> > > > Changes from v1:
> > > > 1. Add a comment in the code describing why this change is necessary,
> > > > and rewrite the commit message with how to reproduce the issue and
> > > > what the result is, using vmstat. (feedback from Andrew Morton and
> > > > Yu Zhao)
> > > > 2. Change the condition to avoid cache_trim_mode from
> > > > 'sc->priority != 1' to 'sc->priority > 1' to reflect cases
> > > > where the priority goes all the way to zero. (feedback from
> > > > Yu Zhao)
> > > >
> > > > --->8---
> > > > From f811ee583158fd53d0e94d32ce5948fac4b17cfe Mon Sep 17 00:00:00 2001
> > > > From: Byungchul Park <byungchul@xxxxxx>
> > > > Date: Mon, 4 Mar 2024 15:27:37 +0900
> > > > Subject: [PATCH v6] mm, vmscan: retry kswapd's priority loop with cache_trim_mode off on failure
> > > >
> > > > With cache_trim_mode on, reclaim logic doesn't bother reclaiming anon
> > > > pages. However, the mode should be used more carefully because it
> > > > prevents anon pages from being reclaimed even when there is a huge
> > > > number of cold anon pages that should be reclaimed. Even worse, that
> > > > can make kswapd_failures reach MAX_RECLAIM_RETRIES and stop kswapd
> > > > from functioning until direct reclaim eventually works and resumes
> > > > kswapd.
> > > >
> > > > So kswapd needs to retry its scan priority loop with cache_trim_mode
> > > > off again if the mode doesn't work for reclaim.
> > > >
> > > > The problematic behavior can be reproduced by:
> > > >
> > > > CONFIG_NUMA_BALANCING enabled
> > > > sysctl_numa_balancing_mode set to NUMA_BALANCING_MEMORY_TIERING
> > > > numa node0 (8GB local memory, 16 CPUs)
> > > > numa node1 (8GB slow tier memory, no CPUs)
> > > >
> > > > Sequence:
> > > >
> > > > 1) echo 3 > /proc/sys/vm/drop_caches
> > > > 2) To emulate a system whose local DRAM is full of cold memory, run
> > > > the following dummy program and never touch the region:
> > > >
> > > > mmap(NULL, 8UL * 1024 * 1024 * 1024, PROT_READ | PROT_WRITE,
> > > >      MAP_ANONYMOUS | MAP_PRIVATE | MAP_POPULATE, -1, 0);
> > > >
> > > > 3) Run any memory intensive work e.g. XSBench.
> > > > 4) Check that numa balancing is working, i.e. promotion/demotion occur.
> > > > 5) Iterate 1) ~ 4) until numa balancing stops.
> > > >
> > > > With this, you can see that promotion/demotion stop working because
> > > > kswapd has stopped due to ->kswapd_failures >= MAX_RECLAIM_RETRIES.
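
A complete, compilable version of the step-2 dummy program might look like
the following. Only the mmap() call itself comes from the commit message;
the wrapper (the error check and the pause() that keeps the mapping alive
without ever touching it) is just convenience boilerplate:

  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
          size_t len = 8UL * 1024 * 1024 * 1024; /* 8 GiB, sized to node0 */
          void *p;

          /* Populate anon pages up front so they sit cold in local DRAM. */
          p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_ANONYMOUS | MAP_PRIVATE | MAP_POPULATE, -1, 0);
          if (p == MAP_FAILED) {
                  perror("mmap");
                  return EXIT_FAILURE;
          }

          /* Never touch the region; just keep it mapped until killed. */
          pause();
          return EXIT_SUCCESS;
  }

Build it with e.g. "gcc -O2 cold_anon.c -o cold_anon", run it in the
background, and then go on with steps 3) and 4).
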
> > > >
> > > > The interesting vmstat deltas, before and after the change, are:
> > > >
> > > > +-----------------------+----------+----------+
> > > > | interesting vmstat    |  before  |   after  |
> > > > +-----------------------+----------+----------+
> > > > | nr_inactive_anon      |   321935 |  1664772 |
> > > > | nr_active_anon        |  1780700 |   437834 |
> > > > | nr_inactive_file      |    30425 |    40882 |
> > > > | nr_active_file        |    14961 |     3012 |
> > > > | pgpromote_success     |      356 |  1293122 |
> > > > | pgpromote_candidate   | 21953245 |  1824148 |
> > > > | pgactivate            |  1844523 |  3311907 |
> > > > | pgdeactivate          |    50634 |  1554069 |
> > > > | pgfault               | 31100294 |  6518806 |
> > > > | pgdemote_kswapd       |    30856 |  2230821 |
> > > > | pgscan_kswapd         |  1861981 |  7667629 |
> > > > | pgscan_anon           |  1822930 |  7610583 |
> > > > | pgscan_file           |    39051 |    57046 |
> > > > | pgsteal_anon          |      386 |  2192033 |
> > > > | pgsteal_file          |    30470 |    38788 |
> > > > | pageoutrun            |       30 |      412 |
> > > > | numa_hint_faults      | 27418279 |  2875955 |
> > > > | numa_pages_migrated   |      356 |  1293122 |
> > > > +-----------------------+----------+----------+
> > > >
> > > > Signed-off-by: Byungchul Park <byungchul@xxxxxx>
> > > > ---
> > > > mm/vmscan.c | 21 ++++++++++++++++++++-
> > > > 1 file changed, 20 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > > > index bba207f41b14..6fe45eca7766 100644
> > > > --- a/mm/vmscan.c
> > > > +++ b/mm/vmscan.c
> > > > @@ -108,6 +108,12 @@ struct scan_control {
> > > > /* Can folios be swapped as part of reclaim? */
> > > > unsigned int may_swap:1;
> > > >
> > > > + /* Not allow cache_trim_mode to be turned on as part of reclaim? */
> > > > + unsigned int no_cache_trim_mode:1;
> > > > +
> > > > + /* Has cache_trim_mode failed at least once? */
> > > > + unsigned int cache_trim_mode_failed:1;
> > > > +
> > > > /* Proactive reclaim invoked by userspace through memory.reclaim */
> > > > unsigned int proactive:1;
> > > >
> > > > @@ -2268,7 +2274,8 @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc)
> > > > * anonymous pages.
> > > > */
> > > > file = lruvec_page_state(target_lruvec, NR_INACTIVE_FILE);
> > > > - if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE))
> > > > + if (file >> sc->priority && !(sc->may_deactivate & DEACTIVATE_FILE) &&
> > > > + !sc->no_cache_trim_mode)
> > > > sc->cache_trim_mode = 1;
> > > > else
> > > > sc->cache_trim_mode = 0;
> > > > @@ -5967,6 +5974,8 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
> > > > */
> > > > if (reclaimable)
> > > > pgdat->kswapd_failures = 0;
> > > > + else if (sc->cache_trim_mode)
> > > > + sc->cache_trim_mode_failed = 1;
> > > > }
> > > >
> > > > /*
> > > > @@ -6898,6 +6907,16 @@ static int balance_pgdat(pg_data_t *pgdat, int order, int highest_zoneidx)
> > > > sc.priority--;
> > > > } while (sc.priority >= 1);
> > > >
> > > > + /*
> > > > + * Restart only if it went through the priority loop all the way,
> > > > + * but cache_trim_mode didn't work.
> > > > + */
> > > > + if (!sc.nr_reclaimed && sc.priority < 1 &&
> > > > + !sc.no_cache_trim_mode && sc.cache_trim_mode_failed) {
> > >
> > > Can we just use sc.cache_trim_mode (instead of
> > > sc.cache_trim_mode_failed) here? That is, if cache_trim_mode is enabled
> >
> > As Johannes mentioned, within a priority scan, all the numa nodes are
> > scanned, each with its own value of cache_trim_mode. So we cannot use
> > cache_trim_mode for that purpose.
>
> Ah, okay. Confining this to kswapd, that might make sense. I will apply
> it if there's no objection. Thanks.

I didn't want to introduce two additional flags either, but the flags make
it possible to do exactly what we want here. I'd like to keep this version
if possible, unless there are any other objections to it.

Byungchul

> Byungchul
> >
> > Byungchul
> >
> > > for priority == 1 and failed to reclaim, we will restart. If this
> > > works, we can avoid adding another flag.
> > >
> > > > + sc.no_cache_trim_mode = 1;
> > > > + goto restart;
> > > > + }
> > > > +
> > > > if (!sc.nr_reclaimed)
> > > > pgdat->kswapd_failures++;
> > >
> > > --
> > > Best Regards,
> > > Huang, Ying