Re: [PATCH v2 0/4] Extend migrate_misplaced_page() to support batch migration

From: Baolin Wang
Date: Wed Aug 23 2023 - 23:14:27 EST




On 8/22/2023 10:47 AM, Huang, Ying wrote:
Baolin Wang <baolin.wang@xxxxxxxxxxxxxxxxx> writes:

Hi,

Currently, on our ARM servers with NUMA enabled, we found that the cross-die latency
is somewhat high and can significantly impact workload performance.
So on ARM servers we rely on NUMA balancing to avoid cross-die
accesses. I previously posted a patchset[1] to support speculative NUMA faults to
improve NUMA balancing's performance according to the principle of data
locality. Moreover, thanks to Huang Ying's patchset[2], which introduced batch
migration as a way to reduce the cost of TLB flushes, it will also benefit
migrating multiple pages at once during NUMA balancing.

So we plan to support batch migration in do_numa_page() to improve
NUMA balancing's performance, but before adding a complicated batch migration
algorithm for NUMA balancing, some cleanup and preparation work needs to be done first,
which is what this patch set does. In short, this patchset extends the
migrate_misplaced_page() interface to support batch migration, with no functional
changes intended.

In addition, these cleanups can also benefit compound pages' NUMA balancing,
which was discussed in a previous thread[3]. IIUC, for compound page NUMA
balancing, it is possible that only some of the pages were successfully migrated, so it is
necessary for migrate_misplaced_page() to return the number of pages that were
migrated successfully.
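To illustrate why the return convention matters, here is a minimal userspace sketch (hypothetical names -- "struct batch", migrate_batch_bool() and migrate_batch_count" are illustrative, not from the actual patches) contrasting a boolean result with a count result:

```c
#include <assert.h>

/* Illustrative model only -- not actual kernel code. A batch of pages
 * queued for NUMA migration, where only part of the batch may move. */
struct batch {
	int nr_pages;   /* pages queued for migration */
	int nr_failed;  /* pages that could not be migrated */
};

/* Boolean-style result (roughly the current usage): partial success
 * is indistinguishable from full success, so per-page fault
 * statistics cannot be kept accurate. */
static int migrate_batch_bool(const struct batch *b)
{
	return b->nr_failed < b->nr_pages;
}

/* Count-style result (the direction this series takes): the caller
 * learns exactly how many pages actually moved. */
static int migrate_batch_count(const struct batch *b)
{
	return b->nr_pages - b->nr_failed;
}
```

With a batch of 8 pages of which 3 fail, the boolean variant still reports "success" while the count variant reports 5, which is the information the fault statistics need.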

But I don't see the returned number being used as anything other than a boolean now.

As I said above, this is preparation for batch migration and compound page NUMA balancing in the future.

In addition, after looking into THP NUMA migration, I found this change is also necessary for THP migration. Since it is possible that only some subpages were successfully migrated if the THP was split, the THP NUMA fault statistics below are not always correct:

	if (page_nid != NUMA_NO_NODE)
		task_numa_fault(last_cpupid, page_nid, HPAGE_PMD_NR,
				flags);

I will try to fix this in the next version.
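A hedged sketch of the accounting issue above (hypothetical helper names, not the actual patch): if migrate_misplaced_page() returned the number of subpages migrated, task_numa_fault() could be charged that count instead of unconditionally charging HPAGE_PMD_NR after a THP split.

```c
#include <assert.h>

#define HPAGE_PMD_NR 512  /* subpages per PMD-sized THP with 4K pages */

/* Current behavior: the full THP is accounted even if the THP was
 * split and only some subpages actually migrated. */
static int thp_fault_pages_old(int nr_migrated)
{
	(void)nr_migrated;	/* the migrated count is ignored today */
	return HPAGE_PMD_NR;
}

/* Possible fix: account only the subpages that actually moved,
 * using the count returned by the extended interface. */
static int thp_fault_pages_new(int nr_migrated)
{
	return nr_migrated;
}
```

For example, if only 300 of 512 subpages migrate after a split, the old accounting still charges 512 faults while the corrected accounting charges 300.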

Per my understanding, I still don't find much value in the changes
except as preparation for batch migration in NUMA balancing. So I still

IMO, only patch 3 is a preparation for batch migration; the other patches are cleanups for migrate_misplaced_page(). I can drop the preparation patches in this series and revise the commit messages.

think it's better to wait for the whole series, where we can check why
these changes are necessary for batch migration. And I think that you
will provide some numbers to justify the batch migration, including pros
and cons.

--
Best Regards,
Huang, Ying

This series is based on the latest mm-unstable (d226b59b30cc).

[1] https://lore.kernel.org/lkml/cover.1639306956.git.baolin.wang@xxxxxxxxxxxxxxxxx/t/#mc45929849b5d0e29b5fdd9d50425f8e95b8f2563
[2] https://lore.kernel.org/all/20230213123444.155149-1-ying.huang@xxxxxxxxx/T/#u
[3] https://lore.kernel.org/all/f8d47176-03a8-99bf-a813-b5942830fd73@xxxxxxx/

Changes from v1:
- Move page validation into a new function suggested by Huang Ying.
- Change numamigrate_isolate_page() to boolean type.
- Update some commit messages.

Baolin Wang (4):
mm: migrate: factor out migration validation into
numa_page_can_migrate()
mm: migrate: move the numamigrate_isolate_page() into do_numa_page()
mm: migrate: change migrate_misplaced_page() to support multiple pages
migration
mm: migrate: change to return the number of pages migrated
successfully

include/linux/migrate.h | 15 +++++++---
mm/huge_memory.c | 23 +++++++++++++--
mm/internal.h | 1 +
mm/memory.c | 43 ++++++++++++++++++++++++++-
mm/migrate.c | 64 +++++++++--------------------------------
5 files changed, 88 insertions(+), 58 deletions(-)