Re: Compaction & folios

From: Matthew Wilcox
Date: Wed Oct 06 2021 - 19:18:14 EST


On Wed, Oct 06, 2021 at 06:53:41PM -0400, Kent Overstreet wrote:
> It may turn out that allocating hugepages still doesn't work as reliably as we'd
> like - but folios are still a big help even when we can't allocate a 2MB page,
> because we'll be able to fall back to an order 6 or 7 or 8 allocation, which is
> something we can't do now. And, since multiple CPU vendors now support
> coalescing contiguous PTE entries in the TLB, this will still get us most of the
> performance benefits of using hugepages.
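
To make the fallback Kent is describing concrete, here's a rough sketch of
the kind of loop I have in mind. The helper name and structure are made up
for illustration; alloc_pages(), __GFP_NORETRY, __GFP_NOWARN and __GFP_COMP
are the real interfaces, and max_order would be something like
HPAGE_PMD_ORDER (9 on x86-64 with 4KB pages) for a 2MB folio:

#include <linux/gfp.h>
#include <linux/mm.h>

/* Illustrative only: try progressively smaller high-order allocations
 * before giving up and taking a single base page.
 */
static struct page *alloc_large_or_fallback(gfp_t gfp, unsigned int max_order)
{
        unsigned int order;

        for (order = max_order; order > 0; order--) {
                struct page *page;

                /*
                 * Don't retry hard or warn for the large orders; if one
                 * isn't available, step down and try the next size.
                 */
                page = alloc_pages(gfp | __GFP_NORETRY | __GFP_NOWARN |
                                   __GFP_COMP, order);
                if (page)
                        return page;
        }

        /* Worst case: a single 4KB base page, which is what we do today. */
        return alloc_pages(gfp, 0);
}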

I'd like to add two things:

1. A lot of people talk about the performance improvements from using
2MB pages, and there are the obvious hardware ones: one fewer level
to dereference in the page table walk when there's a TLB miss, and a
single TLB entry covering an entire 2MB page.

But there are the software ones too, which I believe Google have measured
(perhaps it was the ChromeOS team?). Allocating order-2/3/4 pages reduces
the length of the LRU list by a factor of 4/8/16. That means we reclaim
4-16x as much memory per unit of time, which reduces LRU lock contention.
Not to mention the advantage of being able to use a pagevec to describe
960KB of memory rather than 60KB.
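
For anyone who wants to check those numbers, a trivial userspace program
does the arithmetic; PAGEVEC_ENTRIES below is my stand-in for the kernel's
PAGEVEC_SIZE, which is 15 today. The order-4 line is where the 60KB vs
960KB comparison above comes from:

#include <stdio.h>

#define PAGEVEC_ENTRIES 15      /* mirrors the kernel's PAGEVEC_SIZE */
#define BASE_PAGE_KB    4       /* 4KB base pages */

int main(void)
{
        unsigned int order;

        for (order = 0; order <= 4; order++) {
                unsigned int folio_kb = BASE_PAGE_KB << order;

                /* How much memory one pagevec describes, and how many
                 * LRU entries it takes to cover 1MB of page cache.
                 */
                printf("order %u: %3u KB folios, pagevec covers %4u KB, "
                       "%3u LRU entries per MB\n",
                       order, folio_kb, folio_kb * PAGEVEC_ENTRIES,
                       1024 / folio_kb);
        }
        return 0;
}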

2. We can only measure what CPUs do today. If our behaviour changes,
CPU vendors will adapt. I talked to someone who dabbles in hardware
design, and they said it really isn't that hard to design a TLB that
can support mapping 64KB entries at arbitrary 4KB offsets. There's no
particular incentive for CPU manufacturers to do that today, but if we
start allocating 64KB pages to cache files, that will change.