Re: [v4 1/1] mm: Adaptive hash table scaling

From: Pasha Tatashin
Date: Sun May 21 2017 - 08:59:04 EST


Hi Andi,

Thank you for looking at this. As I mentioned earlier, I would rather not impose a cap. If you think that, for example, the dcache needs a cap, there is already a mechanism for that via the high_limit argument, so that caller can be changed to provide one. This particular patch, however, addresses the scaling problem for everyone by making the hash tables grow with memory at a slower pace.
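
To make the high_limit route concrete, here is a sketch (illustration only, not something I am proposing in this patch) of how the dcache caller could pass a cap. The arguments mirror the current dcache_init() call in fs/dcache.c; only the last one changes, and the 32MB cap is just the example size from your mail, converted into a number of buckets.

	/*
	 * Sketch only: cap the dentry hash table through the existing
	 * high_limit argument of alloc_large_system_hash().  All
	 * arguments except the last one mirror the current
	 * dcache_init() call; the cap below is 32MB worth of buckets,
	 * purely as an example value.
	 */
	dentry_hashtable =
		alloc_large_system_hash("Dentry cache",
					sizeof(struct hlist_bl_head),
					dhash_entries,
					13,		/* one bucket per 8K of memory */
					HASH_EARLY,
					&d_hash_shift,
					&d_hash_mask,
					0,		/* low_limit: none */
					(32UL << 20) / sizeof(struct hlist_bl_head));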

Thank you,
Pasha

On 05/20/2017 10:07 PM, Andi Kleen wrote:
> Pavel Tatashin <pasha.tatashin@xxxxxxxxxx> writes:
>
> > Allow hash tables to scale with memory, but at a slower pace: when HASH_ADAPT
> > is provided, every time memory quadruples the sizes of the hash tables only
> > double instead of quadrupling as well. This algorithm only starts working
> > when the memory size reaches a certain point, currently set to 64G.
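
To make the rule concrete, here is a small standalone demo of the arithmetic (my illustration only, not the patch hunk itself; it assumes 4K pages and 64-bit longs, and the real alloc_large_system_hash() additionally rounds the result to a power of two and honors the low_limit/high_limit arguments):

#include <stdio.h>

#define PAGE_SHIFT		12			/* assume 4K pages */
#define ADAPT_SCALE_BASE	(64UL << 30)		/* adaptation starts at 64G */
#define ADAPT_SCALE_SHIFT	2			/* one step per 4x of memory */
#define ADAPT_SCALE_NPAGES	(ADAPT_SCALE_BASE >> PAGE_SHIFT)

/* ~one bucket per 2^scale bytes of memory, grown more slowly above 64G */
static unsigned long nbuckets(unsigned long mem_bytes, int scale)
{
	unsigned long npages = mem_bytes >> PAGE_SHIFT;
	unsigned long adapt;

	/* every quadrupling of memory beyond 64G bumps 'scale' by one */
	for (adapt = ADAPT_SCALE_NPAGES; adapt < npages;
	     adapt <<= ADAPT_SCALE_SHIFT)
		scale++;

	return mem_bytes >> scale;
}

int main(void)
{
	unsigned long gb;

	/* the dentry hash table uses scale == 13 */
	for (gb = 64; gb <= 1024; gb *= 4)
		printf("%5luG -> %lu buckets\n", gb, nbuckets(gb << 30, 13));
	return 0;
}

With 8-byte buckets this works out to 8388608, 16777216 and 33554432 buckets for 64G, 256G and 1T respectively, i.e. the table only doubles for each quadrupling of memory; the 1T case corresponds to the 256MB figure discussed further down.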

> > This is an example of the dentry hash table size, before and after, for four
> > different memory configurations:

> IMHO the scale is still too aggressive. I find it very unlikely
> that a 1TB machine really needs 256MB of hash table, because the
> number of files in use is unlikely to scale directly with memory.

> Perhaps we should just cap it at some large size, e.g. 32MB.
>
> -Andi
