Re: large files unnecessary trashing filesystem cache?

From: Guido Fiala
Date: Wed Oct 19 2005 - 13:54:29 EST


On Wednesday 19 October 2005 20:43, Kyle Moffett wrote:
> On Oct 19, 2005, at 13:58:37, Guido Fiala wrote:
> > The kernel could do its best to optimize default performance;
> > applications that know their own optimal behaviour should handle
> > it themselves, and all other files are kept under the default
> > heuristic policy (an adaptable, configurable one).
> >
> > The heuristic can be based on access statistics:
> >
> > streaming/sequential access can be guessed from an exactly 100%
> > cache hit rate (drop the pages behind the reader immediately),
>
> What about a grep through my kernel sources or another linear search
> through a large directory tree?  That would get exactly 100% cache
> hit rate, which would cause your method to drop the pages
> immediately, meaning that subsequent greps are equally slow.  I have
> enough memory to hold a couple of kernel trees, and I want my
> grepping to push OO.org out of RAM for a bit while I do my kernel
> development.

Ok, it seems this was thought too easy and needs some more thinking ;-)
Of course I have lots of similar work and don't like to lose the
speedup either.
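
For the case where the application does know its own pattern, the
drop-behind part is already doable from userspace with posix_fadvise().
A minimal sketch (untested, error handling trimmed):

#define _XOPEN_SOURCE 600	/* for posix_fadvise() */
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	char buf[1 << 16];
	off_t done = 0;
	ssize_t n;
	int fd;

	if (argc < 2)
		return 1;
	fd = open(argv[1], O_RDONLY);
	if (fd < 0)
		return 1;

	/* hint: we will read this file once, front to back */
	posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

	while ((n = read(fd, buf, sizeof(buf))) > 0) {
		done += n;
		/* ... consume buf ... */

		/* drop everything we have already consumed;
		 * re-dropping from offset 0 each time is wasteful
		 * but keeps the sketch simple */
		posix_fadvise(fd, 0, done, POSIX_FADV_DONTNEED);
	}
	close(fd);
	return 0;
}

That covers well-behaved streaming readers; the open question is only
what the default policy should do for everything else.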

What other useful data for the job do we already have in the structs?
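
At least the per-file readahead state already keeps some statistics.
From memory of the 2.6 include/linux/fs.h (field names and comments
worth double-checking against the real tree):

struct file_ra_state {
	unsigned long start;		/* current readahead window */
	unsigned long size;
	unsigned long flags;		/* RA_FLAG_xxx */
	unsigned long cache_hit;	/* cache hit count */
	unsigned long prev_page;	/* last read() position */
	unsigned long ra_pages;		/* maximum readahead window */
	unsigned long mmap_hit;		/* cache hit stat for mmap */
	unsigned long mmap_miss;	/* cache miss stat for mmap */
};

If those counters are accurate, cache_hit and the mmap hit/miss pair
look like the natural place to hang a smarter decision, though they say
nothing yet about whether a tree will be walked again.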