[PATCH v2 0/7] disable pcplists during memory offline

From: Vlastimil Babka
Date: Thu Oct 08 2020 - 07:42:31 EST


Changes since v1 [7]:

- add acks/reviews (thanks David and Michal)
- drop "mm, page_alloc: make per_cpu_pageset accessible only after init" as
that's orthogonal and needs more consideration
- squash "mm, page_alloc: drain all pcplists during memory offline" into the
last patch, and move new zone_pcp_* functions into mm/page_alloc. As such,
the new 'force all cpus' param of __drain_all_pages() is never exported
outside page_alloc.c so I didn't add a new wrapper function to hide the bool
- keep pcp_batch_high_lock a mutex as offline_pages is synchronized anyway,
as suggested by Michal. Thus we don't need an atomic variable and synchronization
around it, and the patch is much smaller. If alloc_contig_range() wants to use
the new functionality and keep parallelism, we can add that on top.

As per the discussions [1] [2] this is an attempt to implement David's
suggestion that page isolation should disable pcplists to avoid races with page
freeing in progress. This is done without extra checks in fast paths, as
explained in patch 7. The repeated draining done by [2] is then no longer
needed. The previous version (RFC) is at [3].

The RFC tried to hide pcplists disabling/enabling inside page isolation, but
that wasn't completely possible, as memory offline does not undo the isolation.
Michal suggested an explicit API in [4], so that's the current implementation,
and it does indeed seem nicer.
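The shape of the explicit API can be sketched with a userspace toy model. This is only an illustration of the calling convention: the real zone_pcp_disable()/zone_pcp_enable() are added to mm/page_alloc.c and operate on struct zone's per-cpu pagesets; the toy_zone type, its fields, and the drain helper below are all made up for the sketch.

```c
#include <assert.h>

/* Toy stand-in for struct zone's pcplist state; fields are illustrative. */
struct toy_zone {
    int pcp_high;   /* 0 means frees bypass pcplists entirely */
    int pcp_batch;
    int pcp_count;  /* pages currently sitting on pcplists */
    int buddy_free; /* pages on buddy free lists */
};

/* Move everything from pcplists back to the buddy free lists. */
static void drain_zone_pages(struct toy_zone *z)
{
    z->buddy_free += z->pcp_count;
    z->pcp_count = 0;
}

/*
 * Model of zone_pcp_disable(): lower high/batch so that new frees go
 * straight to the buddy allocator, then drain what is already queued.
 * After this, no page can linger on a pcplist during isolation.
 */
static void zone_pcp_disable(struct toy_zone *z)
{
    z->pcp_high = 0;
    z->pcp_batch = 1;
    drain_zone_pages(z);
}

/* Model of zone_pcp_enable(): restore the normal limits. */
static void zone_pcp_enable(struct toy_zone *z, int high, int batch)
{
    z->pcp_high = high;
    z->pcp_batch = batch;
}
```

The point of the explicit API is that a caller such as offline_pages() can bracket the whole isolate/migrate/offline sequence with a disable/enable pair, rather than page isolation guessing which guarantees the caller wants.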

Once we accept that page isolation users need to take explicit actions around it
depending on the guarantees they need, we can IMHO also accept that the current
pcplist draining can be done by the callers, which is more efficient. After all,
there are only two users of page isolation. So patch 6 does effectively the same
thing as Pavel proposed in [5], and patch 7 implements the stronger guarantees
only for memory offline. If CMA decides to opt in to the stronger guarantees,
they can be added later.
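The caller-side draining of patch 6 can also be sketched as a toy model. Again this is only illustrative: function names mirror the kernel's start_isolate_page_range()/drain_all_pages(), but the bodies and globals are made up, and the real drain must of course run on every CPU.

```c
#include <assert.h>
#include <stdbool.h>

static int g_pcp_count = 3; /* pages queued on pcplists (toy) */
static int g_drains = 0;    /* how many times we drained (toy) */

/* Toy drain_all_pages(): flush all pcplists to the buddy allocator. */
static void drain_all_pages(void)
{
    g_pcp_count = 0;
    g_drains++;
}

/* After patch 6, isolation itself no longer drains per pageblock. */
static bool start_isolate_page_range(void)
{
    return true; /* isolation succeeded */
}

/*
 * Caller-side pattern: isolate first, then drain once. Pages freed to
 * pcplists before the isolation took effect are flushed in one pass,
 * instead of one drain per isolated pageblock.
 */
static bool offline_isolate(void)
{
    if (!start_isolate_page_range())
        return false;
    drain_all_pages();
    return true;
}
```

A single drain after isolation is the efficiency win of patch 6; it is patch 7's pcplist disabling that then closes the remaining race with frees still in flight during memory offline.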

Patches 1-5 are preparatory cleanups for pcplist disabling.

The patchset was briefly tested in QEMU to verify that memory online/offline
works, but I haven't done a stress test that would prove that the race fixed by
[2] is eliminated.

Note that patch 7 could be avoided if we instead adjusted page freeing as shown
in [6], but I believe the current implementation of disabling pcplists is not
overly complex, so I would prefer it over adding new checks and a longer
irq-disabled section to the page freeing hotpaths.

[1] https://lore.kernel.org/linux-mm/20200901124615.137200-1-pasha.tatashin@xxxxxxxxxx/
[2] https://lore.kernel.org/linux-mm/20200903140032.380431-1-pasha.tatashin@xxxxxxxxxx/
[3] https://lore.kernel.org/linux-mm/20200907163628.26495-1-vbabka@xxxxxxx/
[4] https://lore.kernel.org/linux-mm/20200909113647.GG7348@xxxxxxxxxxxxxx/
[5] https://lore.kernel.org/linux-mm/20200904151448.100489-3-pasha.tatashin@xxxxxxxxxx/
[6] https://lore.kernel.org/linux-mm/3d3b53db-aeaa-ff24-260b-36427fac9b1c@xxxxxxx/
[7] https://lore.kernel.org/linux-mm/20200922143712.12048-1-vbabka@xxxxxxx/

Vlastimil Babka (7):
mm, page_alloc: clean up pageset high and batch update
mm, page_alloc: calculate pageset high and batch once per zone
mm, page_alloc: remove setup_pageset()
mm, page_alloc: simplify pageset_update()
mm, page_alloc: cache pageset high and batch in struct zone
mm, page_alloc: move draining pcplists to page isolation users
mm, page_alloc: disable pcplists during memory offline

include/linux/mmzone.h | 6 ++
mm/internal.h | 2 +
mm/memory_hotplug.c | 27 +++---
mm/page_alloc.c | 198 ++++++++++++++++++++++++-----------------
mm/page_isolation.c | 10 +--
5 files changed, 143 insertions(+), 100 deletions(-)

--
2.28.0