Re: [patch 0/9] mm: thrash detection-based file cache sizing v6

From: Johannes Weiner
Date: Tue Nov 26 2013 - 17:31:29 EST


On Mon, Nov 25, 2013 at 04:57:29PM -0800, Andrew Morton wrote:
> On Sun, 24 Nov 2013 18:38:19 -0500 Johannes Weiner <hannes@xxxxxxxxxxx> wrote:
>
> > This series solves the problem by maintaining a history of pages
> > evicted from the inactive list, enabling the VM to detect frequently
> > used pages regardless of inactive list size and facilitate working set
> > transitions.
>
> It's a very readable patchset - thanks for taking the time to do that.

Thanks.

> > 31 files changed, 1253 insertions(+), 401 deletions(-)
>
> It's also a *ton* of stuff. More code complexity, larger kernel data
> structures. All to address a quite narrow class of workloads on a
> relatively small window of machine sizes. How on earth do we decide
> whether it's worth doing?

The fileserver-type workload is not that unusual, and it is not really
restricted to certain machine sizes.

But more importantly, these are reasonable workloads for which our
cache management fails completely, and we have no alternative solution
to offer. What do we tell the people running these loads?

> Also, what's the memcg angle? This is presently a global thing - do
> you think we're likely to want to make it per-memcg in the future?

Yes, it seemed easier to get the global case working first, but the
whole thing is designed with memcg in mind. We can encode the unique
cgroup ID in the shadow entries as well and make the inactive_age per
lruvec instead of per-zone.

If space gets tight in the shadow entry (e.g. on 32 bit), instead of
counting every single eviction, we can group evictions into
generations of bigger chunks - the more memory a machine has, the less
accurate the refault distance needs to be anyway - and can then get
away with fewer bits for the eviction timestamp.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/