Re: [RFC 0/6] mm: improve page allocator scalability via splitting zones

From: David Hildenbrand
Date: Tue May 16 2023 - 06:31:27 EST


On 16.05.23 11:38, Huang, Ying wrote:
Michal Hocko <mhocko@xxxxxxxx> writes:

On Fri 12-05-23 10:55:21, Huang, Ying wrote:
Hi, Michal,

Thanks for the comments!

Michal Hocko <mhocko@xxxxxxxx> writes:

On Thu 11-05-23 14:56:01, Huang Ying wrote:
The patchset is based on upstream v6.3.

More and more cores are being packed into one physical CPU (which is
usually also one NUMA node). In 2023, a high-end server CPU has 56,
64, or more cores, and even higher core counts are planned for future
CPUs. In most cases, all cores in one physical CPU contend for page
allocation on a single zone. This causes heavy zone lock contention
in some workloads, and the situation will only get worse as core
counts grow.

For example, on a 2-socket Intel server machine with 224 logical
CPUs, if the kernel is built with `make -j224`, zone lock contention
can consume up to about 12.7% of CPU cycles.

To improve page allocation scalability, this series creates roughly
one zone instance per 256 GB of memory of a zone type. That is, one
large zone type is split into multiple zone instances. Different
logical CPUs then prefer different zone instances based on their
logical CPU number, so fewer logical CPUs contend on any single zone
and scalability improves.
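
To make the scheme above concrete, here is a small userspace sketch of
one plausible mapping from logical CPU number to zone instance. The
~256 GB instance size comes from the cover letter; the function names
and the simple modulo policy are assumptions made for illustration,
not the series' actual kernel code:

#include <stdio.h>

#define ZONE_INSTANCE_BYTES (256ULL << 30)      /* ~256 GB per zone instance */

/* How many instances a zone of the given size would be split into. */
static unsigned int nr_zone_instances(unsigned long long zone_bytes)
{
        unsigned long long n = (zone_bytes + ZONE_INSTANCE_BYTES - 1) /
                               ZONE_INSTANCE_BYTES;
        return n ? (unsigned int)n : 1;
}

/* Spread logical CPUs across the instances by CPU number. */
static unsigned int preferred_zone_instance(unsigned int cpu,
                                            unsigned int nr_instances)
{
        return cpu % nr_instances;
}

int main(void)
{
        unsigned long long zone_bytes = 1024ULL << 30;  /* e.g. a 1 TB zone */
        unsigned int nr = nr_zone_instances(zone_bytes);
        unsigned int cpu;

        for (cpu = 0; cpu < 8; cpu++)
                printf("cpu %u -> zone instance %u of %u\n",
                       cpu, preferred_zone_instance(cpu, nr), nr);
        return 0;
}

With a 1 TB zone this splits into 4 instances, and CPUs 0-7 spread
round-robin across them, which is one way to read "prefer different
zone instances based on the logical CPU number".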

It is not really clear to me why you need a new zone for all this
rather than partitioning the free lists internally within the zone,
essentially extending the current two-level system to three: per-CPU
caches, per-CPU arenas, and global fallback.

Sorry, I didn't get your idea here. What are per-CPU arenas? What's
the difference between them and the per-CPU caches (PCP)?

Sorry, I haven't given this much more thought than the above.
Essentially, we have a two-level system right now. PCP caches should
reduce contention at the per-CPU level, and that should work
reasonably well if you manage to align the batch sizes to the
workload, AFAIK. If that is not sufficient, why add a full zone
rather than another level that caches across a unit larger than a
single CPU? Maybe a core?

This might be the wrong way to go about it, but there is not much
performance analysis about the source of the lock contention, so I am
mostly guessing.
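
To illustrate the kind of extra level being described here, below is a
compileable userspace sketch. Every name in it, and the choice of a
per-core/per-group "arena", is an assumption invented for the example;
this is not an existing kernel mechanism:

#include <pthread.h>
#include <stdio.h>

/* Level 1: lockless cache, only ever touched by its owning CPU. */
struct pcp_cache {
        void *pages[32];
        unsigned int count;
};

/* Level 2: hypothetical "arena" shared by a small group of CPUs
 * (e.g. one core), with its own lock so contention stays in the group.
 */
struct cpu_arena {
        pthread_mutex_t lock;
        void *pages[512];
        unsigned int count;
};

/* Level 3: the global (per-zone) free list, the slow fallback path. */
struct global_freelist {
        pthread_mutex_t lock;
        void *pages[4096];
        unsigned int count;
};

/* Try the levels in order; a real implementation would also refill an
 * empty level in batches from the level below it.
 */
static void *alloc_page_cached(struct pcp_cache *pcp,
                               struct cpu_arena *arena,
                               struct global_freelist *global)
{
        void *page = NULL;

        if (pcp->count)
                return pcp->pages[--pcp->count];

        pthread_mutex_lock(&arena->lock);
        if (arena->count)
                page = arena->pages[--arena->count];
        pthread_mutex_unlock(&arena->lock);
        if (page)
                return page;

        pthread_mutex_lock(&global->lock);
        if (global->count)
                page = global->pages[--global->count];
        pthread_mutex_unlock(&global->lock);
        return page;
}

int main(void)
{
        static struct pcp_cache pcp;
        static struct cpu_arena arena = { .lock = PTHREAD_MUTEX_INITIALIZER };
        static struct global_freelist global = {
                .lock = PTHREAD_MUTEX_INITIALIZER,
                .pages = { [0] = &global },     /* one dummy "page" */
                .count = 1,
        };

        printf("allocated: %p\n", alloc_page_cached(&pcp, &arena, &global));
        return 0;
}

The point of the middle level is that a miss in the per-CPU cache only
contends with a handful of neighbouring CPUs instead of with every CPU
that shares the zone.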

I guess that page allocation scalability will improve if we put more
pages in the per-CPU caches, or add another level of cache shared by
multiple logical CPUs, because more allocation requests can then be
satisfied without acquiring the zone lock.

As with any caching system, there are always cases where the caches
are drained and too many requests fall through to the underlying slow
layer (the zone here). For example, if a workload needs to allocate a
huge number of pages (more than the cache size) in parallel, it will
eventually run into zone lock contention. The situation gets worse
and worse as one zone is shared by more and more logical CPUs, which
is the industry trend now. Per my understanding, that is why we
observe high zone lock contention cycles in the kbuild test.

So, per my understanding, to improve page allocation scalability in
the bad situations (that is, when caching doesn't work well enough),
we need to restrict the number of logical CPUs that share one zone;
this series is an attempt at that. Better caching can make the good
situations more common and the bad situations rarer, but it seems
hard to eliminate the bad situations entirely.

From another perspective, the amount of memory installed per logical
CPU is not growing, which makes it hard to enlarge the default
per-CPU cache size.

I am also missing some information on why PCP cache tuning is not
sufficient.

PCP does improve page allocation scalability greatly! But it doesn't
help much for workloads that allocate pages on one CPU and free them
on different CPUs. PCP tuning can greatly improve page allocation
scalability for a given workload, but it's not trivial to find the
best tuning parameters for various workloads and runtime states
(workloads may have different loads and memory requirements at
different times). And we may run different workloads on different
logical CPUs of the system, which also makes it hard to find the best
PCP tuning globally.
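
For reference, one of the knobs this kind of "PCP tuning" refers to is
the vm.percpu_pagelist_high_fraction sysctl (available since v5.14).
Below is a minimal userspace sketch of reading and adjusting it; the
written value is an arbitrary example, not a recommendation:

#include <stdio.h>
#include <stdlib.h>

/*
 * vm.percpu_pagelist_high_fraction bounds the per-CPU page list "high"
 * limit as a fraction of the zone's pages, divided among online CPUs
 * (see Documentation/admin-guide/sysctl/vm.rst for the exact
 * semantics).  Writing it requires root.
 */
int main(void)
{
        const char *path = "/proc/sys/vm/percpu_pagelist_high_fraction";
        FILE *f = fopen(path, "r+");
        int cur = 0;

        if (!f) {
                perror(path);
                return EXIT_FAILURE;
        }
        if (fscanf(f, "%d", &cur) == 1)
                printf("current percpu_pagelist_high_fraction: %d\n", cur);

        rewind(f);
        fprintf(f, "8\n");      /* example value only */
        fclose(f);
        return EXIT_SUCCESS;
}

Even with such a knob, one global value has to serve every workload on
the machine, which is the difficulty described above.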

Yes, this makes sense. Does that mean that global PCP tuning is not
keeping up and we need to be able to do more auto-tuning on a local
basis rather than globally?

Similar to the above: I think that PCP greatly helps performance in
the good situations, and splitting zones helps scalability in the bad
situations. They work at different levels.

As for PCP auto-tuning, I think it's hard to implement it in a way
that resolves all problems (that is, ensures the PCP lists are never
drained).

And auto-tuning doesn't sound easy. Do you have some idea of how to do
that?

If we could avoid instantiating more zones and instead improve existing mechanisms (PCP), that would be much preferred IMHO. I'm sure it's not easy, but that shouldn't stop us from trying ;)

I did not look into the details of this proposal, but seeing the change in include/linux/page-flags-layout.h scares me. Further, I'm not so sure how that change really interacts with hot(un)plug of memory ... at a quick glance I feel like this series hacks the code such that the split works based on the boot memory size ...

I agree with Michal that looking into auto-tuning PCP would be preferred. If that can't be done, adding another layer might end up cleaner and eventually cover more use cases.

[I recall there was once a proposal to add a 3rd layer to limit fragmentation to individual memory blocks; but the granularity was rather small and there were also some concerns that I don't recall anymore]

--
Thanks,

David / dhildenb