Re: [PATCH] mm: hugetlb: kill set_huge_swap_pte_at()

From: Qi Zheng
Date: Mon Jun 27 2022 - 23:34:45 EST

On 2022/6/27 22:41, Matthew Wilcox wrote:
> On Sun, Jun 26, 2022 at 10:57:17PM +0800, Qi Zheng wrote:
>> Commit e5251fd43007 ("mm/hugetlb: introduce set_huge_swap_pte_at()
>> helper") added set_huge_swap_pte_at() to handle swap entries on
>> architectures that support hugepages consisting of contiguous ptes.
>> Currently, set_huge_swap_pte_at() is only overridden by arm64.
>
> Bleh. I hate the way we handle these currently.
>
>> +static inline struct folio *hugetlb_swap_entry_to_folio(swp_entry_t entry)
>> +{
>> +	VM_BUG_ON(!is_migration_entry(entry) && !is_hwpoison_entry(entry));
>> +
>> +	return page_folio(pfn_to_page(swp_offset(entry)));
>> +}
>
> We haven't needed a pfn_to_folio() yet, but perhaps we should have one?

Hi,

IMO, it would be better to have a pfn_to_folio(), which would save the
separate page_folio() call in cases like this one.

But that is unrelated to the current patch; maybe it can be a separate
optimization patch.


> Related, how should we store migration entries for multi-order folios
> in the page tables? We can either encode the individual page in
> question, or we can encode the folio. Do we need to support folios
> being mapped askew (ie unaligned), or will folios always be mapped
> aligned?

Do we currently have a scenario where folios need to be mapped askew
(unaligned)? Maybe it could be needed for pte-mapped THP? Hmm, I have
no idea.
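FWIW, IIUC what the code does today is encode the pfn of the
individual subpage in each entry. A simplified illustration with the
existing swapops helpers (assuming a read-only migration of 'page'):

	swp_entry_t entry;
	pte_t swp_pte;

	/* Encode the pfn of this particular subpage ... */
	entry = make_readable_migration_entry(page_to_pfn(page));
	/* ... and store it as a non-present pte. */
	swp_pte = swp_entry_to_pte(entry);

So encoding the folio instead would only work if we never need to
recover which subpage a given entry referred to.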


>> +	if (!pte_present(pte)) {
>> +		struct folio *folio;
>> +
>> +		folio = hugetlb_swap_entry_to_folio(pte_to_swp_entry(pte));
>> +		ncontig = num_contig_ptes(folio_size(folio), &pgsize);
>> +
>> +		for (i = 0; i < ncontig; i++, ptep++)
>> +			set_pte_at(mm, addr, ptep, pte);
>> +		return;
>> +	}
>
> It seems like a shame to calculate folio_size() only to turn it into a
> number of pages. Don't you want to just use:
>
>	ncontig = folio_nr_pages(folio);

We can't use folio_nr_pages() here: a PMD_SIZE hugepage is mapped by a
single (block) entry, so ncontig must be 1, whereas folio_nr_pages()
would return the PTRS_PER_PTE pages in the folio.
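For reference, the arm64 num_contig_ptes() looks roughly like this
(simplified from arch/arm64/mm/hugetlbpage.c; the PUD_SIZE case is
omitted):

	static int num_contig_ptes(unsigned long size, size_t *pgsize)
	{
		int contig_ptes = 0;

		*pgsize = size;

		switch (size) {
		case PMD_SIZE:
			/* One block entry maps the whole hugepage. */
			contig_ptes = 1;
			break;
		case CONT_PMD_SIZE:
			/* CONT_PMDS contiguous entries at PMD granularity. */
			*pgsize = PMD_SIZE;
			contig_ptes = CONT_PMDS;
			break;
		case CONT_PTE_SIZE:
			/* CONT_PTES contiguous entries at PAGE_SIZE granularity. */
			*pgsize = PAGE_SIZE;
			contig_ptes = CONT_PTES;
			break;
		}

		return contig_ptes;
	}

So for a PMD_SIZE folio it returns 1, while folio_nr_pages() would
return PTRS_PER_PTE.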

Thanks,
Qi