Re: [PATCH] mm: remove lock_page_memcg() from rmap

From: Hugh Dickins
Date: Thu Nov 24 2022 - 01:03:49 EST


On Wed, 23 Nov 2022, Johannes Weiner wrote:

> rmap changes (mapping and unmapping) of a page currently take
> lock_page_memcg() to serialize 1) update of the mapcount and the
> cgroup mapped counter with 2) cgroup moving the page and updating the
> old cgroup and the new cgroup counters based on page_mapped().
>
> Before b2052564e66d ("mm: memcontrol: continue cache reclaim from
> offlined groups"), we used to reassign all pages that could be found
> on a cgroup's LRU list on deletion - something that rmap didn't
> naturally serialize against. Since that commit, however, the only
> pages that get moved are those mapped into page tables of a task
> that's being migrated. In that case, the pte lock is always held (and
> we know the page is mapped), which keeps rmap changes at bay already.
>
> The additional lock_page_memcg() by rmap is redundant. Remove it.
>
> Signed-off-by: Johannes Weiner <hannes@xxxxxxxxxxx>

Thank you, I love it: but with sorrow and shame, NAK to this version.

I was gearing up to rush in the crash fix at the bottom, when testing
showed that the new VM_WARN_ON_ONCE(!folio_mapped(folio)) actually hits.

So I've asked Stephen to drop this mm-unstable commit from -next for
tonight, while we think about what more is needed.

I was disbelieving when I saw the warning, couldn't understand it at all.
But a look at get_mctgt_type() shatters my illusion: it doesn't just
return a page for pte_present(ptent), it goes off looking up swap
cache and page cache too; plus I've no idea whether an MC_TARGET_DEVICE
page would appear as folio_mapped() or not.
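
For anyone following along, the dispatch in get_mctgt_type() that I'm
pointing at looks roughly like this (abridged and quoted from memory,
so don't trust the details; pte markers and the device-private case
are elided):

	struct page *page = NULL;
	swp_entry_t ent = { .val = 0 };

	if (pte_present(ptent))
		page = mc_handle_present_pte(vma, addr, ptent);
	else if (is_swap_pte(ptent))
		/* may find the page in swap cache */
		page = mc_handle_swap_pte(vma, ptent, &ent);
	else if (pte_none(ptent))
		/* may find the page in page cache */
		page = mc_handle_file_pte(vma, addr, ptent);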

Does that mean that we just have to reinstate the folio_mapped() checks
in mm/memcontrol.c, i.e. revert all the mm/memcontrol.c changes from the
commit? Or does it invalidate the whole project of removing
lock_page_memcg() from mm/rmap.c?
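
By "reinstate" I mean, roughly, keeping the guards the patch removed
from mem_cgroup_move_account(), e.g. for the anon counters (sketch
only, lifted straight from the hunk above; NR_FILE_MAPPED would need
the same treatment):

	if (folio_test_anon(folio)) {
		if (folio_mapped(folio)) {
			__mod_lruvec_state(from_vec, NR_ANON_MAPPED, -nr_pages);
			__mod_lruvec_state(to_vec, NR_ANON_MAPPED, nr_pages);
			...
		}
	}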

Too disappointed to think about it more tonight :-(
Hugh


> ---
> mm/memcontrol.c | 35 ++++++++++++++++++++---------------
> mm/rmap.c | 12 ------------
> 2 files changed, 20 insertions(+), 27 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 23750cec0036..52b86ca7a78e 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -5676,7 +5676,10 @@ static struct page *mc_handle_file_pte(struct vm_area_struct *vma,
> * @from: mem_cgroup which the page is moved from.
> * @to: mem_cgroup which the page is moved to. @from != @to.
> *
> - * The caller must make sure the page is not on LRU (isolate_page() is useful.)
> + * This function acquires folio_lock() and folio_memcg_lock(). The
> + * caller must exclude all other possible ways of accessing
> + * page->memcg, such as LRU isolation (to lock out isolation) and
> + * having the page mapped and pte-locked (to lock out rmap).
> *
> * This function doesn't do "charge" to new cgroup and doesn't do "uncharge"
> * from old cgroup.
> @@ -5696,6 +5699,13 @@ static int mem_cgroup_move_account(struct page *page,
> VM_BUG_ON_FOLIO(folio_test_lru(folio), folio);
> VM_BUG_ON(compound && !folio_test_large(folio));
>
> + /*
> + * We're only moving pages mapped into the moving process's
> + * page tables. The caller's pte lock prevents rmap from
> + * removing the NR_x_MAPPED state while we transfer it.
> + */
> + VM_WARN_ON_ONCE(!folio_mapped(folio));
> +
> /*
> * Prevent mem_cgroup_migrate() from looking at
> * page's memory cgroup of its source page while we change it.
> @@ -5715,30 +5725,25 @@ static int mem_cgroup_move_account(struct page *page,
> folio_memcg_lock(folio);
>
> if (folio_test_anon(folio)) {
> - if (folio_mapped(folio)) {
> - __mod_lruvec_state(from_vec, NR_ANON_MAPPED, -nr_pages);
> - __mod_lruvec_state(to_vec, NR_ANON_MAPPED, nr_pages);
> - if (folio_test_transhuge(folio)) {
> - __mod_lruvec_state(from_vec, NR_ANON_THPS,
> - -nr_pages);
> - __mod_lruvec_state(to_vec, NR_ANON_THPS,
> - nr_pages);
> - }
> + __mod_lruvec_state(from_vec, NR_ANON_MAPPED, -nr_pages);
> + __mod_lruvec_state(to_vec, NR_ANON_MAPPED, nr_pages);
> +
> + if (folio_test_transhuge(folio)) {
> + __mod_lruvec_state(from_vec, NR_ANON_THPS, -nr_pages);
> + __mod_lruvec_state(to_vec, NR_ANON_THPS, nr_pages);
> }
> } else {
> __mod_lruvec_state(from_vec, NR_FILE_PAGES, -nr_pages);
> __mod_lruvec_state(to_vec, NR_FILE_PAGES, nr_pages);
>
> + __mod_lruvec_state(from_vec, NR_FILE_MAPPED, -nr_pages);
> + __mod_lruvec_state(to_vec, NR_FILE_MAPPED, nr_pages);
> +
> if (folio_test_swapbacked(folio)) {
> __mod_lruvec_state(from_vec, NR_SHMEM, -nr_pages);
> __mod_lruvec_state(to_vec, NR_SHMEM, nr_pages);
> }
>
> - if (folio_mapped(folio)) {
> - __mod_lruvec_state(from_vec, NR_FILE_MAPPED, -nr_pages);
> - __mod_lruvec_state(to_vec, NR_FILE_MAPPED, nr_pages);
> - }
> -
> if (folio_test_dirty(folio)) {
> struct address_space *mapping = folio_mapping(folio);
>
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 459dc1c44d8a..11a4894158db 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1222,9 +1222,6 @@ void page_add_anon_rmap(struct page *page,
> bool compound = flags & RMAP_COMPOUND;
> bool first = true;
>
> - if (unlikely(PageKsm(page)))
> - lock_page_memcg(page);
> -
> /* Is page being mapped by PTE? Is this its first map to be added? */
> if (likely(!compound)) {
> first = atomic_inc_and_test(&page->_mapcount);
> @@ -1254,9 +1251,6 @@ void page_add_anon_rmap(struct page *page,
> if (nr)
> __mod_lruvec_page_state(page, NR_ANON_MAPPED, nr);
>
> - if (unlikely(PageKsm(page)))
> - unlock_page_memcg(page);
> -
> /* address might be in next vma when migration races vma_adjust */
> else if (first)
> __page_set_anon_rmap(page, vma, address,
> @@ -1321,7 +1315,6 @@ void page_add_file_rmap(struct page *page,
> bool first;
>
> VM_BUG_ON_PAGE(compound && !PageTransHuge(page), page);
> - lock_page_memcg(page);
>
> /* Is page being mapped by PTE? Is this its first map to be added? */
> if (likely(!compound)) {
> @@ -1349,7 +1342,6 @@ void page_add_file_rmap(struct page *page,
> NR_SHMEM_PMDMAPPED : NR_FILE_PMDMAPPED, nr_pmdmapped);
> if (nr)
> __mod_lruvec_page_state(page, NR_FILE_MAPPED, nr);
> - unlock_page_memcg(page);
>
> mlock_vma_page(page, vma, compound);
> }
> @@ -1378,8 +1370,6 @@ void page_remove_rmap(struct page *page,
> return;
> }
>
> - lock_page_memcg(page);
> -
> /* Is page being unmapped by PTE? Is this its last map to be removed? */
> if (likely(!compound)) {
> last = atomic_add_negative(-1, &page->_mapcount);
> @@ -1427,8 +1417,6 @@ void page_remove_rmap(struct page *page,
> * and remember that it's only reliable while mapped.
> */
>
> - unlock_page_memcg(page);
> -
> munlock_vma_page(page, vma, compound);
> }
>
> --
> 2.38.1

[PATCH] mm: remove lock_page_memcg() from rmap - fix

Blame me for the hidden "else": with the PageKsm unlock above it
removed, the "else if (first)" now pairs with "if (nr)" and does the
wrong thing, leaving the page's anon_vma unset, then VM_BUG_ON before
do_swap_page's set_pte_at.
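
To spell out the mispairing, this is roughly how the broken code read
after the removal (abridged from the hunk being fixed below):

	if (nr)
		__mod_lruvec_page_state(page, NR_ANON_MAPPED, nr);

	/* address might be in next vma when migration races vma_adjust */
	else if (first)		/* silently became the "else" of "if (nr)" */
		__page_set_anon_rmap(page, vma, address,
				     !!(flags & RMAP_EXCLUSIVE));
	else
		__page_check_anon_rmap(page, vma, address);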

Signed-off-by: Hugh Dickins <hughd@xxxxxxxxxx>
---
mm/rmap.c | 15 ++++++++-------
1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/mm/rmap.c b/mm/rmap.c
index 11a4894158db..5a8d27fdc644 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1251,13 +1251,14 @@ void page_add_anon_rmap(struct page *page,
if (nr)
__mod_lruvec_page_state(page, NR_ANON_MAPPED, nr);

- /* address might be in next vma when migration races vma_adjust */
- else if (first)
- __page_set_anon_rmap(page, vma, address,
- !!(flags & RMAP_EXCLUSIVE));
- else
- __page_check_anon_rmap(page, vma, address);
-
+ if (!PageKsm(page)) {
+ /* address may be in next vma if migration races vma_adjust */
+ if (first)
+ __page_set_anon_rmap(page, vma, address,
+ !!(flags & RMAP_EXCLUSIVE));
+ else
+ __page_check_anon_rmap(page, vma, address);
+ }
mlock_vma_page(page, vma, compound);
}

--
2.35.3