Re: large files unnecessarily trashing filesystem cache?

From: Ingo Oeser
Date: Wed Oct 19 2005 - 10:54:41 EST


Hi,

On Wednesday 19 October 2005 13:10, gfiala@xxxxxxxxxx wrote:
> Zitat von Andrew Morton <akpm@xxxxxxxx>:
> > So I'd also suggest a new resource limit which, if set, is copied into the
> > application's file_structs on open(). So you then write a little wrapper
> > app which does setrlimit()+exec():
> >
> > limit-cache-usage -s 1000 my-fave-backup-program <args>
> >
> > Which will cause every file which my-fave-backup-program reads or writes to
> > be limited to a maximum pagecache residency of 1000 kbytes.
>
> Or make it another 'ulimit' parameter...
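
For concreteness, the wrapper Andrew describes might look roughly like
the sketch below. RLIMIT_PAGECACHE is of course hypothetical - no kernel
defines it, so the number here is picked to be invalid and setrlimit()
will simply fail with EINVAL on stock kernels. The setrlimit()+exec()
pattern itself is standard POSIX:

    /* limit-cache-usage: sketch of the proposed wrapper.
     * RLIMIT_PAGECACHE does not exist; the value is made up so the
     * call fails cleanly instead of clobbering a real limit. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/resource.h>
    #include <unistd.h>

    #ifndef RLIMIT_PAGECACHE
    #define RLIMIT_PAGECACHE 255    /* hypothetical resource number */
    #endif

    int main(int argc, char *argv[])
    {
        struct rlimit rl;

        if (argc < 4 || strcmp(argv[1], "-s") != 0) {
            fprintf(stderr, "usage: %s -s <kbytes> prog [args...]\n",
                    argv[0]);
            return 1;
        }

        /* limit given in kbytes, as in Andrew's example */
        rl.rlim_cur = rl.rlim_max = strtoul(argv[2], NULL, 10) * 1024;
        if (setrlimit(RLIMIT_PAGECACHE, &rl) < 0)
            perror("setrlimit");    /* expected to fail today */

        /* rlimits are inherited across exec(), so the target program
         * and its children run under the new limit */
        execvp(argv[3], &argv[3]);
        perror("execvp");
        return 1;
    }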

Such a limit is already there: there is a ulimit for "maximum RSS"
(RLIMIT_RSS), which is at least a superset of "maximum pagecache residency".

It is already settable and known to many admins. But AFAIR it is not
fully honoured by the kernel, right?
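
For reference, the interface side already works; it is only the
enforcement that is missing. A minimal example, which compiles and runs
today, but where the limit has no effect:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* 64 MB resident-set limit (soft and hard); the syscall
         * succeeds, but the kernel does not enforce RLIMIT_RSS */
        struct rlimit rl = { 64 * 1024 * 1024, 64 * 1024 * 1024 };

        if (setrlimit(RLIMIT_RSS, &rl) < 0)
            perror("setrlimit(RLIMIT_RSS)");
        return 0;
    }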

A per-file limit is a much better choice, though, since it would allow
concurrent streaming, which is needed at least for implementing
timeshifting [1].
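
What an application can already do today, per file descriptor, is drop
its own pages behind itself with posix_fadvise(). A rough sketch of the
write side of such a FIFO - fifo_append() and its calling convention
are made up for illustration, only the fadvise mechanism is real:

    #define _XOPEN_SOURCE 600
    #include <fcntl.h>
    #include <unistd.h>

    /* Append a chunk to the timeshift file, then ask the kernel to
     * drop the cached pages we just wrote. POSIX_FADV_DONTNEED only
     * discards clean pages, hence the fdatasync() first. */
    static ssize_t fifo_append(int fd, const void *buf, size_t len,
                               off_t pos)
    {
        ssize_t n = pwrite(fd, buf, len, pos);

        if (n > 0) {
            fdatasync(fd);
            posix_fadvise(fd, pos, n, POSIX_FADV_DONTNEED);
        }
        return n;
    }

    int main(void)
    {
        char chunk[4096] = "dummy payload";
        int fd = open("timeshift.buf", O_CREAT | O_WRONLY, 0644);

        if (fd < 0)
            return 1;
        fifo_append(fd, chunk, sizeof(chunk), 0);
        close(fd);
        return 0;
    }

But a concurrent reader that lags far behind the writer defeats this
kind of manual advice, which is why a real per-file residency limit
would still be the nicer interface.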

So either I am missing something, or this is not a proper solution yet.


Regards

Ingo Oeser

[1] This is obviously done with some kind of on-disk FIFO.
