Re: [PATCH] O_STREAMING - flag for optimal streaming I/O

From: Giuliano Pochini (pochini@shiny.it)
Date: Thu Oct 10 2002 - 08:17:35 EST


On 10-Oct-2002 Helge Hafting wrote:
> Giuliano Pochini wrote:
>
>> Yes, it makes sense, but it's useless or harmful to discard caches
>> if nobody else needs memory. You just lose data that may be
>> requested in the future for no reason.
>
> Sure, so the ideal is to not drop unconditionally, but
> make sure that the "finished" O_STREAMING pages are
> the very first ones to go whenever memory pressure happens.
>
> The question then becomes "can you do that, with no more
> overhead or code complexity than the existing stuff?"

I don't know enough about Linux internals to suggest anything
really useful. Perhaps something like "drop pages which
weren't already cached when we requested them" would be
enough to prevent cached data used by other tasks from
being evicted from memory.

> It wouldn't necessarily make much difference, because
> a linux machine is almost always under memory pressure.
> Free memory is simply filled up with cache till there
> is no more left. From then on, all requests for memory
> are handled by throwing something else out of cache
> or into swap. In that case the streaming pages
> are evicted quickly anyway, and the ideal case
> is no different from the implemented case.

O_STREAMING is a way to reduce the cache footprint of some
files, and it does the job very well, unless those files
are accessed concurrently by two or more processes.
Think again about backing up a large database. I don't
want to use tar because it kills the caches. I would
like a way to read the db so that the cached part of
the db (the 20% which gets 80% of the accesses) is not
evicted.

Bye.




This archive was generated by hypermail 2b29 : Tue Oct 15 2002 - 22:00:37 EST