Re: [PATCH v2 5/8] mm/gup: Accelerate thp gup even for "pages != NULL"

From: Peter Xu
Date: Tue Jun 20 2023 - 16:14:05 EST


On Tue, Jun 20, 2023 at 08:02:41PM +0200, David Hildenbrand wrote:
> Thinking about why we have the flush_anon_page/flush_dcache_page stuff here
> and not in GUP-fast ... I suspect that all GUP-fast archs don't need that
> stuff.

Yeah, that's a bit confusing, and I sincerely don't know the answer. Though
here I had the opposite feeling - I feel like gup-fast should also do
it.. but maybe it just got missed.

AFAIU the idea was that the data can be inconsistent between the user and
kernel mappings (cache aliasing), and if the flush is needed in slow gup I
don't see why it's not needed in fast gup..
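For reference, the slow-GUP pattern being discussed is roughly the below - a
simplified sketch of __get_user_pages() in mm/gup.c from memory, with error
handling and the per-subpage loop elided, not the literal code:

```c
/*
 * Simplified sketch of the relevant part of __get_user_pages()
 * (mm/gup.c); details elided.
 */
page = follow_page_mask(vma, start, foll_flags, &ctx);
/* ... error handling, retries ... */
if (pages) {
	pages[i] = page;
	/* resolve potential D-cache aliasing between user and kernel: */
	flush_anon_page(vma, page, start);
	flush_dcache_page(page);
}
```

Gup-fast installs the pages into the array without either flush, which is the
asymmetry in question.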

There are still a few archs that implement flush_dcache_page() and
meanwhile have HAVE_FAST_GUP selected, like arm/arm64/powerpc.

It's just out of the scope of what this series wanted to achieve.

> I was wondering if there are some possible races with the flush_anon_page()
> / flush_dcache_page() on a page that might have been unmapped in the
> meantime (as we dropped the PT lock ...).
>
> Some flush_dcache_page() implementations do some IMHO confusing
> page_mapcount() things (like in arch/arc/mm/cache.c). But maybe the unmap
> code handles that as well ... and most likely these archs don't support THP.

Maybe true.

It seems that page_mapcount() was mostly used to identify whether a page is
mapped in the userspace address space. If so, I'd worry less, because the
only race possible here, IIUC, is when the user unmaps the page
concurrently (and since we got it from gup it must have been mapped once).
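For context, the kind of page_mapcount() usage David mentions looks roughly
like the below - a hedged paraphrase of what e.g. arch/arc/mm/cache.c does on
an aliasing VIPT D-cache, not the literal code:

```c
/*
 * Paraphrased sketch: on an aliasing cache, flush_dcache_page() only
 * needs to fix up the user alias if the page is currently mapped into
 * userspace; otherwise it can defer, marking the kernel copy dirty and
 * flushing when the page gets mapped later.
 */
void flush_dcache_page(struct page *page)
{
	if (page_mapcount(page)) {
		/* mapped: flush both the kernel and the user alias now */
	} else {
		/* unmapped: mark dcache state dirty, flush lazily on map */
	}
}
```

So the mapcount check decides between flushing eagerly and deferring, which is
why a concurrent unmap is the interesting race here.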

Then I would assume the caller should be prepared for that, and the
flush_dcache_page() won't matter much in this case, I assume: if userspace
dropped all the data anyway, the whole page* can already be invalid for
that VA after a completed unmap.

>
> Anyhow, just a note that the flush_anon_page/flush_dcache_page left me
> confused.

I share the same confusion. Hopefully what this series did here doesn't
change that, or at least doesn't make it worse.

--
Peter Xu