Re: [RFC PATCH] LTTng relay buffer allocation, read, write

From: Peter Zijlstra
Date: Sat Sep 27 2008 - 13:14:39 EST


On Sat, 2008-09-27 at 09:40 -0400, Mathieu Desnoyers wrote:
> As I told Martin, I was thinking about taking an axe and moving stuff around in
> relay. Which I just did.
>
> This patch reimplements relay with a linked list of pages and provides
> read/write wrappers which should be used to read from or write to the
> buffers. It's the core of a layered approach to the design requirements
> expressed by Martin and discussed earlier.
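>
> Roughly, the structure might look like this (a sketch only; these
> names are illustrative, not necessarily the ones in the patch):
>
>     struct buf_page {
>         struct list_head list;   /* links the pages of one buffer */
>         struct page *page;       /* the backing page frame */
>     };
>
>     struct rchan_buf {
>         struct list_head pages;  /* linked list of buf_page */
>         size_t alloc_size;       /* total buffer size in bytes */
>         /* write/read offsets, cached page pointers, ... */
>     };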
>
> It does not provide _any_ sort of locking on buffer data. Locking should be done
> by the caller. Given that we might think of very lightweight locking schemes, it
> makes sense to me that the underlying buffering infrastructure supports event
> records larger than 1 page.
>
> A cache of 3 pointers is used to keep track of the current page of the
> buffer used for writing, for reading, and for the current subbuffer
> header lookup. The offset of each page within the buffer is saved in
> the page frame structure so the cache can be checked without extra
> locking.
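>
> For instance, the cached page lookup could go something like this
> (a sketch; page_private() is assumed to hold the page's offset within
> the buffer, and relay_find_page() stands in for a walk of the page
> list):
>
>     static struct page *relay_get_page(struct rchan_buf *buf,
>                                        struct page **cache,
>                                        size_t offset)
>     {
>         struct page *p = *cache;
>
>         /* reuse the cached page if the offset still falls within it */
>         if (!p || offset < page_private(p)
>                || offset >= page_private(p) + PAGE_SIZE)
>             p = relay_find_page(buf, offset);
>         *cache = p;
>         return p;
>     }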
>
> TODO: No splice file operations are implemented yet; they should come
> soon. The idea is to splice the buffers directly into files or to the
> network.
>
> This patch is released as an early RFC. It builds, but that's about it.
> Testing will come when I implement the splice ops.

Why? What aspects of Steve's ring-buffer interface will hinder us from
optimizing the implementation to be as light-weight as you like?

The thing is, I'd like to see it that light as well ;-)

As for the page-spanning entries, I think we can do that with Steve's
system just fine; it's just that Linus said it was a dumb idea and Steve
dropped it. There is nothing fundamental stopping us from recording a
length that is > PAGE_SIZE and copying data into the pages one at a
time.
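
Something like this copy loop would do (a sketch with made-up names;
rb_page_at() is assumed to return a kernel mapping of the page that
covers the given write offset):

    static void rb_write_large(struct ring_buffer *rb, size_t woff,
                               const void *src, size_t len)
    {
        const char *s = src;

        while (len) {
            size_t pgoff = woff & (PAGE_SIZE - 1);
            size_t chunk = min_t(size_t, len, PAGE_SIZE - pgoff);

            /* copy at most up to the end of the current page */
            memcpy(rb_page_at(rb, woff), s, chunk);
            s += chunk;
            woff += chunk;
            len -= chunk;
        }
    }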

Nor do I see why it would be impossible to implement splice on top of
Steve's ring-buffer...

So again, why?
