[LSF/MM/BPF TOPIC] DAMON Updates and Plans: Automation of DAMON tuning, tiering, and VM guest scaling

From: SeongJae Park
Date: Mon Jan 29 2024 - 15:47:56 EST


Hi all,


Thanks to the discussions and feedback that we received from the LSF/MM/BPF
2023 DAMON updates and future plans session[1], DAMON has made many advances
and gathered yet more future plans. I'd like to again share and discuss the
follow-up changes and status since last year's session, and the future
development plans, at LSF/MM/BPF 2024.

A few topics would be covered, including the ones below.

User Aims-oriented DAMOS Self-tuning
------------------------------------

I shared "feedback-based quota auto tuning" as the top priority item for 2023
in last year's session. Fortunately it made some progress. A simple feedback
loop-based quota tuning and user feedback interface have been implemented and
merged into the mainline from v6.8-rc1. By the time of LSF/MM/BPF 2024,
hopefully the remaining part of the initial idea, specifically making DAMON
feeds itself, will be implemented and merged in the mm tree.

I'd like to share the implementation details, discuss remaining room for
improvement, and find future extension opportunities. Hopefully auto-tuning
of DAMON's monitoring parameters will be one of the future works to consider.
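
For readers unfamiliar with the mechanism, below is a minimal user-space
sketch of the general technique: a proportional feedback loop that scales a
quota toward making a user-provided score meet a target. It is not the
in-kernel implementation; the function names, the scores, and the exact
adjustment rule here are illustrative only.

/*
 * Minimal sketch of a proportional feedback loop for a quota.  For
 * illustration only; this is not the actual DAMON implementation or
 * interface, and all names and constants here are made up.
 */
#include <stdio.h>

/* Scale the quota toward making 'score' meet 'target'. */
static unsigned long adjust_quota(unsigned long quota, unsigned long score,
                                  unsigned long target)
{
        if (!score)
                return quota * 2;       /* far from the goal: grow quickly */
        /* grow the quota when under-achieving, shrink it when over-achieving */
        return quota * target / score;
}

int main(void)
{
        unsigned long quota = 100;      /* e.g. KiB to apply the action to per interval */
        unsigned long target = 50;      /* the user-requested goal value */
        /* pretend measurements of the goal metric over a few intervals */
        unsigned long scores[] = {10, 25, 40, 60, 55, 50};

        for (unsigned long i = 0; i < sizeof(scores) / sizeof(scores[0]); i++) {
                quota = adjust_quota(quota, scores[i], target);
                printf("interval %lu: score %lu -> new quota %lu\n",
                       i, scores[i], quota);
        }
        return 0;
}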

DAMON-based Tiered Memory Management
------------------------------------

I'm personally not working on this topic directly, but apparently a few folks
are exploring DAMON usage for tiered memory management. Some academic
papers[2,3] exploring the opportunity have been published. In November 2023,
I shared a rough RFC idea[4] for it, based on the self-tuning above.
Recently, SK hynix has also adopted DAMON for their CXL-based tiered memory
management solution[5], and sent patches for their work[6].

I'd like to provide a summary of the ongoing works and updates on my idea.
Hopefully an early proof-of-concept level implementation of the idea can be
shared in the session.
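
To make the direction a bit more concrete, the sketch below shows the kind
of policy the RFC is about: one rule demoting data that has stayed cold on
the fast node, and one promoting data that is hot on the slow node, each
with a migration budget that the self-tuning above would adjust.  The
structures, fields, and numbers are hypothetical illustrations, not the
actual DAMON code or the posted patches.

/*
 * Purely illustrative sketch of two access-pattern-based tiering rules;
 * the structures and numbers are hypothetical, not the actual DAMON code
 * or the posted patches.
 */
#include <stdbool.h>
#include <stdio.h>

struct tier_rule {
        int src_node;                   /* node the rule monitors */
        int dst_node;                   /* node the data is migrated to */
        bool hot;                       /* act on hot (true) or cold (false) data */
        unsigned long min_age_us;       /* how long the access pattern must last */
        unsigned long quota_bytes;      /* migration budget, adjusted by self-tuning */
};

/* Hypothetical setup: fast DRAM on node 0, slower CXL memory on node 1. */
static const struct tier_rule rules[] = {
        /* demote data that stayed cold on the fast node for 5 seconds */
        { 0, 1, false, 5000000, 64UL << 20 },
        /* promote data that stayed hot on the slow node for 1 second */
        { 1, 0, true, 1000000, 64UL << 20 },
};

int main(void)
{
        for (unsigned long i = 0; i < sizeof(rules) / sizeof(rules[0]); i++)
                printf("rule %lu: node %d -> node %d, %s data, quota %lu MiB\n",
                       i, rules[i].src_node, rules[i].dst_node,
                       rules[i].hot ? "hot" : "cold",
                       rules[i].quota_bytes >> 20);
        return 0;
}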

Access/Contiguity-aware Memory Auto-scaling
-------------------------------------------

This is a new idea which was not discussed in last year's LSF/MM/BPF. The
main purpose is to make VM guest kernels' memory over-subscription
access-aware, efficient, reliable, and simple to use. Specifically, the
guest kernel will steal its memory based on access patterns in a
contiguity-aware manner and report it to the host as free to use. PSI-based
auto-tuning of the stealing aggressiveness may be used. It will also apply a
'struct page' overhead reduction mechanism to the stolen memory. We're
currently thinking about memory hotplugging and vmemmap remapping as
candidate mechanisms. For simple usage, the interface will be similar to
that of virtio-balloon, which is widely adopted. The first version of the
more detailed idea[7] has been shared before.
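
As a very rough illustration of the intended control loop, the guest-side
logic could look like the sketch below: pick a cold and physically
contiguous range, reduce its 'struct page' overhead, report it to the host,
and back off while the guest is under memory pressure. Every function in
the sketch is a made-up placeholder naming one step of the idea; none of
them is an existing kernel or virtio interface.

/*
 * Hypothetical sketch of the guest-side control loop.  Each function is
 * a placeholder for one step of the idea, not an existing interface.
 */
#include <stdbool.h>
#include <stdio.h>

/* Placeholder: find a cold, physically contiguous range via access monitoring. */
static bool find_cold_contiguous_range(unsigned long *pfn, unsigned long *nr_pages)
{
        *pfn = 0x100000;
        *nr_pages = 512;
        return true;
}

/* Placeholder: hot-unplug or vmemmap-remap the range to cut 'struct page' overhead. */
static int shrink_metadata(unsigned long pfn, unsigned long nr_pages)
{
        (void)pfn;
        (void)nr_pages;
        return 0;
}

/* Placeholder: report the range to the host as free, virtio-balloon style. */
static void report_free_to_host(unsigned long pfn, unsigned long nr_pages)
{
        printf("reporting %lu pages at PFN %#lx to the host as free\n",
               nr_pages, pfn);
}

/* Placeholder: current guest memory pressure (PSI 'some' stall, in us). */
static unsigned long memory_psi_some_us(void)
{
        return 100;
}

/* One iteration of the stealing loop, throttled by guest memory pressure. */
static void autoscale_iteration(unsigned long psi_threshold_us)
{
        unsigned long pfn, nr_pages;

        if (memory_psi_some_us() > psi_threshold_us)
                return;         /* the guest is under pressure; back off */
        if (!find_cold_contiguous_range(&pfn, &nr_pages))
                return;         /* nothing cold and contiguous enough */
        if (shrink_metadata(pfn, nr_pages))
                return;         /* isolation failed; retry later */
        report_free_to_host(pfn, nr_pages);
}

int main(void)
{
        autoscale_iteration(1000);
        return 0;
}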

Because it is still at a pure idea level, not much progress is expected
before LSF/MM/BPF. This item would therefore be primarily for the future
plans part. That said, at least a second version of the design will be
shared before the session, and hopefully an early proof-of-concept level
implementation or some test results as well. Since this is expected to be
more of a future plan than a status update, I hope to have more discussion
to gather design-level concerns and find possible collaboration chances.

Misc
----

If time allows, I could also cover more follow-ups for items that I shared
as future plans, and the feedback/questions I received from last year's
session.

- Merging the DAMON user-space tool into the tree
- DAMON documentation improvements
- Write-only monitoring
- THP memory footprint reduction
- DAMON-based working set size report
- In-production DAMON usages

[1] https://lwn.net/Articles/931769/
[2] https://arxiv.org/abs/2302.09468
[3] https://dl.acm.org/doi/10.1145/3600006.3613167
[4] https://lore.kernel.org/damon/20231112195602.61525-1-sj@xxxxxxxxxx/
[5] https://github.com/skhynix/hmsdk/releases/tag/hmsdk-v2.0
[6] https://lore.kernel.org/r/20240115045253.1775-1-honggyu.kim@xxxxxx
[7] https://lore.kernel.org/damon/20231112195114.61474-1-sj@xxxxxxxxxx/


Thanks,
SJ