Re: [PATCH 0/3 RESEND] Per memcg lru_gen node stat

From: Huan Yang
Date: Wed Oct 18 2023 - 22:33:09 EST



On 2023/10/19 3:59, T.J. Mercier wrote:

On Wed, Oct 18, 2023 at 9:34 AM Yu Zhao <yuzhao@xxxxxxxxxx> wrote:
On Sun, Oct 8, 2023 at 8:57 PM Huan Yang <link@xxxxxxxx> wrote:
The original global lru_gen node in debugfs can already show every memcg's
lru_gen info via "lru_gen" or "lru_gen_full", and accepts commands written
to lru_gen. But that output mixes all memcgs' info together, and the
commands require knowing each memcg's ID.
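For context, the existing global debugfs interface works roughly as sketched
below (based on the multi-gen LRU admin guide; exact fields and values vary
by kernel version, and the memcg ID 1 / node 0 / generation numbers are
illustrative only):

```shell
# Reading the global node interleaves every memcg, identified by ID:
#   memcg  memcg_id  memcg_path
#    node  node_id
#      gen  birth_time  anon_pages  file_pages
cat /sys/kernel/debug/lru_gen

# Commands must name the memcg by its ID. E.g. trigger aging
# (create a new generation) for memcg 1 on node 0, up to max gen 7:
echo '+ 1 0 7' > /sys/kernel/debug/lru_gen

# Trigger eviction of generations older than min gen 4 for the same memcg:
echo '- 1 0 4' > /sys/kernel/debug/lru_gen
```

A per-memcg node would drop the need to look up the memcg ID, since the
target memcg is implied by the file's location in the cgroup hierarchy.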

This patchset adds an lru_gen node to each memcg; with this node, we can
get the lru_gen info for each memcg individually.
We can also write commands to control each memcg's lru_gen seq, but this
node doesn't support multiple commands: each memcg processes only one
command at a time.
Adding TJ from the Android team. (The other TJ you CC'ed is from the
ChromeOS team.)

This series introduced a new ABI, which has to be maintained forever.
How exactly would it be used in *production*?

Android doesn't officially support memcgs. So I want to understand the
real-world use cases first.
Not sure how Android came up but I'm happy to chat. We want to turn on
memcg v2 for Android but I'm currently working through perf impacts
before that happens. Android can't use debugfs in production, but I
think we'd prefer to use memory.reclaim for eviction anyway because it
respects memcg limits and reclaims from slab.
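For reference, memory.reclaim (cgroup v2) takes a byte count and proactively
reclaims from the target memcg hierarchy, respecting limits and including
slab. A sketch, with an example cgroup path:

```shell
# Proactively reclaim up to 256 MiB from one cgroup (cgroup v2).
# "example-app" is a placeholder path; the write returns EAGAIN if the
# kernel cannot reclaim the requested amount.
echo "256M" > /sys/fs/cgroup/example-app/memory.reclaim
```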
Yes, this kind of shrink control can actually be done with proactive reclaim.

So maybe it's possible to add just aging functionality specific to
MGLRU? It'd be nice to know how you're going to use the aging, or why
Because debugfs is not always mounted, if we want to know lru_gen's info, it
may be nice to offer a memcg node that shows each memcg's lru_gen info.
you want this version of eviction instead of what memory.reclaim does.

So, this node is not meant to replace memory.reclaim; that is good enough. Aging and other control just follow the global debugfs node's behavior. If there is no need for it, deleting the write support is OK.

Thanks