Re: [PATCH v3 1/2] khugepaged: enable collapse pmd for pte-mapped THP

From: Oleg Nesterov
Date: Fri Aug 02 2019 - 12:21:41 EST


On 08/01, Song Liu wrote:
>
> +static int khugepaged_add_pte_mapped_thp(struct mm_struct *mm,
> +					 unsigned long addr)
> +{
> +	struct mm_slot *mm_slot;
> +	int ret = 0;
> +
> +	/* hold mmap_sem for khugepaged_test_exit() */
> +	VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_sem), mm);
> +	VM_BUG_ON(addr & ~HPAGE_PMD_MASK);
> +
> +	if (unlikely(khugepaged_test_exit(mm)))
> +		return 0;
> +
> +	if (!test_bit(MMF_VM_HUGEPAGE, &mm->flags) &&
> +	    !test_bit(MMF_DISABLE_THP, &mm->flags)) {
> +		ret = __khugepaged_enter(mm);
> +		if (ret)
> +			return ret;
> +	}

see my reply to v2

> +void collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long haddr)
> +{
> +	struct vm_area_struct *vma = find_vma(mm, haddr);
> +	pmd_t *pmd = mm_find_pmd(mm, haddr);
> +	struct page *hpage = NULL;
> +	unsigned long addr;
> +	spinlock_t *ptl;
> +	int count = 0;
> +	pmd_t _pmd;
> +	int i;
> +
> +	VM_BUG_ON(haddr & ~HPAGE_PMD_MASK);
> +
> +	if (!vma || !vma->vm_file || !pmd)
                    ^^^^^^^^^^^^^

I am not sure this check is enough; see below.

> +		return;
> +
> +	/* step 1: check all mapped PTEs are to the right huge page */
> +	for (i = 0, addr = haddr; i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE) {
> +		pte_t *pte = pte_offset_map(pmd, addr);
> +		struct page *page;
> +
> +		if (pte_none(*pte))
> +			continue;
> +
> +		page = vm_normal_page(vma, addr, *pte);

Why can't vm_normal_page() return NULL? Again, we do not know if this
vm_file is the same shmem_file() or something else.
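
At the very least a NULL check seems needed before the page is used; a
minimal sketch (I am assuming the function has some "abort" error path,
it is not visible in the quoted hunk):

	page = vm_normal_page(vma, addr, *pte);
	if (!page)
		goto abort;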

And in fact I don't think it is safe to use vm_normal_page(vma, addr)
unless you know that vma includes this addr.
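
IOW, shouldn't the entry check verify that vma really covers the whole
pmd range? Something like (untested):

	if (!vma || !vma->vm_file ||
	    vma->vm_start > haddr || vma->vm_end < haddr + HPAGE_PMD_SIZE)
		return;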

To be honest, I am not even sure that the unconditional mm_find_pmd() is
safe if this "something else" is really special.
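
If it can be special, perhaps the pmd lookup should only happen after the
vma has been validated; again just a sketch, and vma_is_shmem() is only
my guess at a suitable filter:

	vma = find_vma(mm, haddr);
	if (!vma || !vma->vm_file || !vma_is_shmem(vma) ||
	    vma->vm_start > haddr || vma->vm_end < haddr + HPAGE_PMD_SIZE)
		return;

	pmd = mm_find_pmd(mm, haddr);
	if (!pmd)
		return;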

Oleg.