Re: [PATCH] mm: migrate: Fix THP's mapcount on isolation

From: Hugh Dickins
Date: Wed Nov 23 2022 - 00:14:17 EST


On Wed, 23 Nov 2022, Gavin Shan wrote:

> The issue was reported when removing memory through a virtio_mem device.
> A transparent huge page which has gone through a copy-on-write fault is
> wrongly regarded as pinned, and so escapes isolation in
> isolate_migratepages_block(). As a result the page can't be migrated,
> and the corresponding memory block can't be taken offline.
>
> Fix it by replacing page_mapcount() with total_mapcount(). With this,
> the transparent huge page can be isolated and migrated, and the memory
> block can be taken offline.
>
> Fixes: 3917c80280c9 ("thp: change CoW semantics for anon-THP")
> Cc: stable@xxxxxxxxxxxxxxx # v5.8+
> Reported-by: Zhenyu Zhang <zhenyzha@xxxxxxxxxx>
> Suggested-by: David Hildenbrand <david@xxxxxxxxxx>
> Signed-off-by: Gavin Shan <gshan@xxxxxxxxxx>

Interesting, good catch; it looks right to me, except for the Fixes line
and the mention of v5.8. That CoW change may have added a case which
easily demonstrates the problem, but the test would have been wrong on a
THP long before then; it is only since v5.7 that compound pages have been
allowed through at all to reach this test, so I think it should be

Fixes: 1da2f328fa64 ("mm,thp,compaction,cma: allow THP migration for CMA allocations")
Cc: stable@xxxxxxxxxxxxxxx # v5.7+

Oh, no, stop: this is not so easy, even in the latest tree.

Because at the time of that "admittedly racy check", we hold no reference
at all on the page in question: if it's PageLRU or PageCompound at one
instant, it may be something quite different the next. That leaves the
check vulnerable to whatever BUG_ON()s there may be in the
total_mapcount() path - which needs research. *Perhaps* there are no more
BUG_ON()s in the total_mapcount() path than in the existing
page_mapcount() path.
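
To make the window concrete, with your patch applied the ordering is
roughly this (paraphrased from isolate_migratepages_block(), not an
exact quote of any particular tree):

	/*
	 * No reference is held on the page across these checks: it may
	 * be freed, split, or reused as part of another compound page
	 * between any two of them.
	 */
	mapping = page_mapping(page);
	if (!mapping && page_count(page) > total_mapcount(page))
		goto isolate_fail;

	/* ... other checks elided ... */

	/* Only here is the page pinned against being freed elsewhere. */
	if (unlikely(!get_page_unless_zero(page)))
		goto isolate_fail;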

I suspect that for this to be safe (before your patch, and more so after
it), the "admittedly racy check" will need to be shifted down to after
the get_page_unless_zero() (and the sequence of operations when a
compound page is initialized will need checking too).
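
Roughly what I have in mind - entirely untested, just to show the
ordering, with the intervening checks elided, and assuming the existing
isolate_fail_put exit can be used to drop the reference:

	/*
	 * Take the speculative reference first, so that the page can no
	 * longer be freed or re-initialized as part of another compound
	 * page while its counts are being compared.
	 */
	if (unlikely(!get_page_unless_zero(page)))
		goto isolate_fail;

	/*
	 * Still an admittedly racy check, but now against a page we
	 * hold; our own reference has to be subtracted before comparing.
	 */
	mapping = page_mapping(page);
	if (!mapping && page_count(page) - 1 > total_mapcount(page))
		goto isolate_fail_put;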

The races I'm talking about are much, much rarer than the condition you
are trying to avoid, so it's frustrating; but such races are real,
and increasing stable's exposure to them is not so good.

Sorry, I'm going to run away now: just raising these concerns
without working on the solution.

Hugh

> ---
> mm/compaction.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index c51f7f545afe..c408b5e04c1d 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -990,7 +990,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
> * admittedly racy check.
> */
> mapping = page_mapping(page);
> - if (!mapping && page_count(page) > page_mapcount(page))
> + if (!mapping && page_count(page) > total_mapcount(page))
> goto isolate_fail;
>
> /*
> --
> 2.23.0