Re: large files unnecessarily trashing filesystem cache?

From: Badari Pulavarty
Date: Tue Oct 18 2005 - 15:48:57 EST


On Tue, 2005-10-18 at 22:01 +0200, Guido Fiala wrote:
> (please note, I am not subscribed to the list, please CC me on replies)
>
> Story:
> Once in a while we have a discussion on the vdr (video disk recorder) mailing
> list about very large files trashing the filesystem's memory cache, leading
> to unnecessary delays when accessing directory contents no longer cached.
>
> With this program, and certainly with all applications that deal with very
> large files that are read only once (much larger than usual memory), it
> happens that all other cached blocks of the filesystem are evicted from
> memory solely to keep as much of that file as possible in memory, which
> seems to be a bad strategy in most situations.
>
> Of course one could always add fadvise() calls to all applications, but I
> suggest a discussion on whether a (configurable) maximum in-memory cache on
> a per-file basis should be implemented in linux/mm, or wherever this belongs.
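
For illustration, a minimal sketch of the application-side fadvise approach
mentioned above; the file name and chunk size are placeholders, and
posix_fadvise() only gives the kernel a hint, not a guarantee:

/* Stream a large file without keeping it in the page cache. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define CHUNK (1 << 20)			/* read 1 MB at a time */

int main(void)
{
	int fd = open("/video/recording.ts", O_RDONLY);	/* placeholder path */
	if (fd < 0) { perror("open"); return EXIT_FAILURE; }

	/* declare a streaming access pattern */
	posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);

	char *buf = malloc(CHUNK);
	if (!buf) { close(fd); return EXIT_FAILURE; }

	off_t done = 0;
	ssize_t n;

	while ((n = read(fd, buf, CHUNK)) > 0) {
		/* ... process the chunk ... */
		done += n;
		/* hint that the pages read so far will not be reused,
		 * so the kernel may evict them instead of other cached data */
		posix_fadvise(fd, 0, done, POSIX_FADV_DONTNEED);
	}

	free(buf);
	close(fd);
	return EXIT_SUCCESS;
}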
>
> My guess was that it has something to do with mm/readahead.c; a quick and
> dirty test limiting the result of the function max_sane_readahead() to
> 8 MBytes did not solve the issue, but I might have done something wrong.
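
For reference, the quick clamp described above might look roughly like the
sketch below against a 2.6-era mm/readahead.c. The stock heuristic (about half
of the node's inactive plus free pages) is paraphrased and may differ in detail
from the kernel actually used, and the 8 MByte figure is taken from the test
above; this is not the exact patch that was tried:

/* mm/readahead.c (sketch): clamp max_sane_readahead() to 8 MB worth of
 * pages on top of the stock "half of inactive + free pages" heuristic.
 */
#include <linux/mm.h>
#include <linux/pagemap.h>

#define RA_HARD_CAP_PAGES	((8UL << 20) / PAGE_CACHE_SIZE)	/* 8 MBytes */

unsigned long max_sane_readahead(unsigned long nr)
{
	unsigned long active, inactive, free;

	__get_zone_counts(&active, &inactive, &free,
			  NODE_DATA(numa_node_id()));

	/* stock limit: half of the node's inactive + free pages */
	nr = min(nr, (inactive + free) / 2);

	/* experimental extra clamp: never read ahead more than 8 MB */
	return min(nr, RA_HARD_CAP_PAGES);
}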
>
> I've searched the archive but could not find a previous discussion - is this a
> new idea?
>
> It would be interesting to discuss whether and when this proposed feature
> could lead to better performance, and whether it has any unwanted side effects.
>
> Thanks for any ideas on this issue.

Is there a reason why those applications couldn't use O_DIRECT?
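
Something along these lines, as a rough sketch rather than a recipe: O_DIRECT
bypasses the page cache entirely, but the buffer, offset and transfer size
have to be suitably aligned. The 4096-byte alignment, file name and chunk size
below are placeholder assumptions:

/* Read a large file with O_DIRECT so it never touches the page cache.
 * Alignment requirements depend on the filesystem and device.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define ALIGN	4096
#define CHUNK	(1 << 20)		/* 1 MB per read, a multiple of ALIGN */

int main(void)
{
	void *buf;
	ssize_t n;

	int fd = open("/video/recording.ts", O_RDONLY | O_DIRECT);	/* placeholder path */
	if (fd < 0) { perror("open(O_DIRECT)"); return EXIT_FAILURE; }

	/* O_DIRECT needs an aligned buffer */
	if (posix_memalign(&buf, ALIGN, CHUNK)) {
		fprintf(stderr, "posix_memalign failed\n");
		close(fd);
		return EXIT_FAILURE;
	}

	while ((n = read(fd, buf, CHUNK)) > 0) {
		/* ... process n bytes; no page-cache copy is left behind ... */
	}

	free(buf);
	close(fd);
	return EXIT_SUCCESS;
}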

Thanks,
Badari
