Re: [PATCH v2 4/5] mm: Refactor release_pages()

From: Ryan Roberts
Date: Thu Aug 31 2023 - 15:58:22 EST


On 30/08/2023 20:11, Matthew Wilcox wrote:
> On Wed, Aug 30, 2023 at 10:50:10AM +0100, Ryan Roberts wrote:
>> In preparation for implementing folios_put_refs() in the next patch,
>> refactor release_pages() into a set of helper functions, which can be
>> reused. The primary difference between release_pages() and
>> folios_put_refs() is how they iterate over the set of folios. The
>> per-folio actions are identical.
>
> As you noted, we have colliding patchsets. I'm not hugely happy with
> how patch 4 turned out,

Could you describe the issues as you see them? I'm keen not to repeat the same
bad patterns in future.

> so I thought I'd send some addendum patches to
> my RFC series that implement pfn_range_put() (maybe should have been
> pfn_ranges_put()?) on top of my patch series. I think it's a bit nicer,
> but not quite as nice as it could be.
>
> I'm thinking about doing ...
>
> void release_unref_folios(struct folio_batch *folios)
> {
>         struct lruvec *lruvec = NULL;
>         unsigned long flags = 0;
>         int i;
>
>         for (i = 0; i < folios->nr; i++) {
>                 struct folio *folio = folios->folios[i];
>                 free_swap_cache(folio);

Agree this can't go here - I would put it in pfn_range_put() instead. But I
wouldn't want it in folios_put() as you suggested in the other email - that
would surely change the behaviour of folios_put()?

>                 __page_cache_release(folio, &lruvec, &flags);
>         }

I think you would need to add:

if (lruvec)
        unlock_page_lruvec_irqrestore(lruvec, flags);

>         mem_cgroup_uncharge_folios(folios);
>         free_unref_folios(folios);
> }
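
i.e. with free_swap_cache() moved out (see below) and the unlock added, I'd
expect the helper to end up something like this (just a sketch on top of your
RFC helpers):

void release_unref_folios(struct folio_batch *folios)
{
        struct lruvec *lruvec = NULL;
        unsigned long flags = 0;
        int i;

        for (i = 0; i < folios->nr; i++) {
                struct folio *folio = folios->folios[i];

                __page_cache_release(folio, &lruvec, &flags);
        }
        /* Release the last lruvec lock taken inside the loop. */
        if (lruvec)
                unlock_page_lruvec_irqrestore(lruvec, flags);

        mem_cgroup_uncharge_folios(folios);
        free_unref_folios(folios);
}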
>
> then this becomes:
>
> void folios_put(struct folio_batch *folios)
> {
>         int i, j;
>
>         for (i = 0, j = 0; i < folios->nr; i++) {
>                 struct folio *folio = folios->folios[i];
>
>                 if (is_huge_zero_page(&folio->page))
>                         continue;
>                 if (folio_is_zone_device(folio)) {
>                         if (put_devmap_managed_page(&folio->page))
>                                 continue;
>                         if (folio_put_testzero(folio))
>                                 free_zone_device_page(&folio->page);
>                         continue;
>                 }
>
>                 if (!folio_put_testzero(folio))
>                         continue;
>                 if (folio_test_hugetlb(folio)) {
>                         free_huge_folio(folio);
>                         continue;
>                 }
>
>                 if (j != i)
>                         folios->folios[j] = folio;
>                 j++;
>         }
>
>         folios->nr = j;
>         if (!j)
>                 return;
>         release_unref_folios(folios);
> }
>
> and pfn_range_put() also becomes shorter and loses all the lruvec work.

Something like this?

void pfn_range_put(struct pfn_range *range, unsigned int nr)
{
        struct folio_batch folios;
        unsigned int i;

        folio_batch_init(&folios);
        for (i = 0; i < nr; i++) {
                struct folio *folio = pfn_folio(range[i].start);
                unsigned int refs = range[i].end - range[i].start;

                /* <<<<< HERE? To be equivalent to
                 * free_pages_and_swap_cache() */
                free_swap_cache(folio);

                if (is_huge_zero_page(&folio->page))
                        continue;

                if (folio_is_zone_device(folio)) {
                        if (put_devmap_managed_page_refs(&folio->page, refs))
                                continue;
                        if (folio_ref_sub_and_test(folio, refs))
                                free_zone_device_page(&folio->page);
                        continue;
                }

                if (!folio_ref_sub_and_test(folio, refs))
                        continue;

                /* hugetlb has its own memcg */
                if (folio_test_hugetlb(folio)) {
                        free_huge_folio(folio);
                        continue;
                }

                if (folio_batch_add(&folios, folio) == 0)
                        release_unref_folios(&folios);
        }

        if (folios.nr)
                release_unref_folios(&folios);
}
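
(For reference, I'm assuming struct pfn_range is just a run of consecutive
pfns within a single folio, something like:

struct pfn_range {
        unsigned long start;    /* first pfn in the run */
        unsigned long end;      /* one past the last pfn */
};

so refs above is the number of page references being dropped for that folio.)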

>
> Thoughts?

Looks like it's getting there to me. The bulk of the logic inside the loop is
still common between folios_put() and pfn_range_put(), though; perhaps we could
have a common helper for that, which would just need to take (folio, refs)?
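
Maybe something like this (rough sketch; the name is hypothetical, and it
returns true iff the folio hit zero and should be batched for
release_unref_folios()):

/*
 * Drop 'refs' references from 'folio', handling the special cases
 * (huge zero page, zone device, hugetlb) inline. Returns true if the
 * refcount hit zero and the caller should batch the folio for
 * release_unref_folios().
 */
static bool folio_put_refs_testzero(struct folio *folio, unsigned int refs)
{
        if (is_huge_zero_page(&folio->page))
                return false;

        if (folio_is_zone_device(folio)) {
                if (put_devmap_managed_page_refs(&folio->page, refs))
                        return false;
                if (folio_ref_sub_and_test(folio, refs))
                        free_zone_device_page(&folio->page);
                return false;
        }

        if (!folio_ref_sub_and_test(folio, refs))
                return false;

        /* hugetlb has its own memcg */
        if (folio_test_hugetlb(folio)) {
                free_huge_folio(folio);
                return false;
        }

        return true;
}

folios_put() would call it with refs == 1 (folio_put_testzero() is equivalent
to folio_ref_sub_and_test(folio, 1), and put_devmap_managed_page() is the
refs == 1 case of put_devmap_managed_page_refs()), and pfn_range_put() would
pass the per-range refcount.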

So by adding free_swap_cache() above, we could call pfn_range_put() directly
and remove free_pages_and_swap_cache() entirely.
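
i.e. at the mmu_gather flush site the change would be roughly (hand-wavy; the
field names assume the batch already stores pfn ranges, as in my series):

-       free_pages_and_swap_cache(batch->pages, batch->nr);
+       pfn_range_put(batch->ranges, batch->nr);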

What's the best way to move forward here? Two options, as I see it:

- I drop patch 4 and create a version of pfn_range_put() that has the same
semantics as above but essentially just copies the old release_pages() (similar
to my v1). That keeps my series independent, and you can then replace it with
the new pfn_range_put() as part of your series.

- We can just hook my patches up to the end of your series and do it all in one go.

Opinion?

Thanks,
Ryan