Re: large files unnecessary trashing filesystem cache?

From: Guido Fiala
Date: Wed Oct 19 2005 - 13:00:58 EST


On Wednesday 19 October 2005 06:10, Lee Revell wrote:
> On Tue, 2005-10-18 at 22:01 +0200, Guido Fiala wrote:
> > Of course one could always implement f_advise-calls in all
> > applications
>
> Um, this seems like the obvious answer. The application doing the read
> KNOWS it's a streaming read, while the best the kernel can do is guess.
>
> You don't really make much of a case that fadvise can't do the job.
>

The kernel could do its best to optimize the default case; applications that
know their own optimal behaviour should advise it themselves, and all other
files are kept under the default heuristic policy (an adaptable, configurable
one).
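
For the application side, a minimal untested sketch of the fadvise route:
a streaming reader declares its access pattern up front and drops its own
pages behind itself as it goes.

#define _XOPEN_SOURCE 600       /* for posix_fadvise() */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        char buf[1 << 16];
        off_t done = 0;
        ssize_t n;
        int fd;

        if (argc != 2) {
                fprintf(stderr, "usage: %s <file>\n", argv[0]);
                return 1;
        }
        fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* declare the sequential pattern up front */
        posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

        while ((n = read(fd, buf, sizeof(buf))) > 0) {
                done += n;
                /* we will not reread this data: let the kernel
                   reclaim the cached pages behind us right away */
                posix_fadvise(fd, 0, done, POSIX_FADV_DONTNEED);
        }
        close(fd);
        return 0;
}

Doable, but every streaming application would have to be patched for it,
which is why a sane default heuristic still matters.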

The heuristic can be based on access statistics:

streaming/sequential access can be guessed from an exactly 100% cache hit
rate, i.e. every page is referenced exactly once (drop the pages behind the
reader immediately);

random access with repeated reads can be guessed from a hit rate above 100%,
i.e. pages are referenced more than once (keep as much in memory as
possible). A toy classifier for both cases follows below.
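
Something like this (userspace illustration only, not kernel code; I'm
reading "100% hit rate" as "each page referenced exactly once", and all
names are made up):

#include <stdio.h>

enum pattern { PAT_SEQUENTIAL, PAT_REREAD, PAT_OTHER };

struct file_stats {
        unsigned long references;       /* total page lookups on the file */
        unsigned long distinct;         /* distinct pages touched */
};

static enum pattern classify(const struct file_stats *s)
{
        if (s->distinct && s->references == s->distinct)
                return PAT_SEQUENTIAL;  /* 100%: drop pages behind reader */
        if (s->distinct && s->references > s->distinct)
                return PAT_REREAD;      /* >100%: keep pages cached */
        return PAT_OTHER;               /* <100%: see below */
}

int main(void)
{
        struct file_stats stream = { 1000, 1000 };      /* e.g. video playback */
        struct file_stats reread = { 5000, 1000 };      /* e.g. hot index file */

        printf("stream -> %d, reread -> %d\n",
               classify(&stream), classify(&reread));
        return 0;
}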

A hit rate below 100% is already handled sanely, I guess, by reducing
readahead. A more ambitious heuristic would gather access patterns: if every
n-th block is read, readahead every n-th block (an unlikely scenario, I
guess, but it might happen in databases); see the sketch below.
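
The stride case could look something like this (again only a sketch with
hypothetical names): remember the previous block and stride per file, and
once two consecutive strides match, prefetch at that stride.

#include <stdio.h>

struct stride_state {
        long last_block;        /* most recent block read */
        long last_stride;       /* delta between the last two reads */
};

/* returns the block worth reading ahead, or -1 if no stable stride yet */
static long stride_readahead(struct stride_state *st, long block)
{
        long stride = block - st->last_block;
        long next = -1;

        if (stride != 0 && stride == st->last_stride)
                next = block + stride;  /* pattern holds: prefetch */

        st->last_stride = stride;
        st->last_block = block;
        return next;
}

int main(void)
{
        struct stride_state st = { 0, 0 };
        long blocks[] = { 4, 8, 12, 16 };       /* every 4th block */

        for (int i = 0; i < 4; i++)
                printf("read %ld -> prefetch %ld\n",
                       blocks[i], stride_readahead(&st, blocks[i]));
        return 0;
}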

What about files read backwards? Other patterns?