On Tue, Oct 08, 2002 at 01:03:11PM -0700, Andrew Morton wrote:
> Matthias Schniedermeyer wrote:
> > ...
> > I only have 3 GB of RAM, and creating and writing trashes the whole
> > cache twice.
> That's actually something completely dumb and irritating which
> Linux has done for ever ;)
> What we need is to detect the situation where someone is linearly
> walking through a file which is preposterously too large to cache,
> and just start dropping it.
> It's not hard to implement the lower machinery to do that - it would
> basically be an internal call to posix_fadvise(), which we don't
> have but could and perhaps should...
> The tricky part is designing the algorithm which decides when to
> pull the trigger.
I have more of these cases.
I use a program called VDR, which records digital TV programs.
After a recording is finished I cut the recordings. In my case I "stream"
the input data via NFS from the recording machine(s) through a converter
into a local temporary directory. Once I have enough files I create
ISO images from them. When I create an ISO image I "stream" the
files from HDD1 to HDD2, because otherwise it would completely kill the
performance. Then I burn the ISO image onto a DVD-R.
Every single part of the whole process trashes the cache.
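If that fadvise-style hint were available, each "streaming" step above could drop the pages it has already passed over as it goes. A rough sketch of such a cache-friendly copy loop (a hypothetical helper, not part of VDR or any existing tool; short writes on regular files are treated as errors to keep it simple):

    #define _XOPEN_SOURCE 600
    #include <fcntl.h>
    #include <unistd.h>

    /* Copy fd_in to fd_out in chunks, dropping the already-copied pages
     * every 8 MB so a multi-GB streaming copy does not evict the rest
     * of the page cache. */
    static int stream_copy(int fd_in, int fd_out)
    {
            char buf[64 * 1024];
            off_t done = 0, last_drop = 0;
            ssize_t n;

            while ((n = read(fd_in, buf, sizeof(buf))) > 0) {
                    if (write(fd_out, buf, n) != n)
                            return -1;
                    done += n;
                    if (done - last_drop >= 8 * 1024 * 1024) {
                            /* pages must be written back before they can be dropped */
                            fdatasync(fd_out);
                            posix_fadvise(fd_in,  last_drop, done - last_drop,
                                          POSIX_FADV_DONTNEED);
                            posix_fadvise(fd_out, last_drop, done - last_drop,
                                          POSIX_FADV_DONTNEED);
                            last_drop = done;
                    }
            }
            return n < 0 ? -1 : 0;
    }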
--
Real Programmers consider "what you see is what you get" to be just as bad a concept in Text Editors as it is in women. No, the Real Programmer wants a "you asked for it, you got it" text editor -- complicated, cryptic, powerful, unforgiving, dangerous.