Re: [PATCH v9 1/2] ring-buffer: Introducing ring-buffer mapping functions

From: Steven Rostedt
Date: Thu Dec 21 2023 - 12:57:28 EST


On Thu, 21 Dec 2023 17:35:22 +0000
Vincent Donnefort <vdonnefort@xxxxxxxxxx> wrote:

> @@ -5999,6 +6078,307 @@ int ring_buffer_subbuf_order_set(struct trace_buffer *buffer, int order)
> }
> EXPORT_SYMBOL_GPL(ring_buffer_subbuf_order_set);
>

The kernel developers have agreed to allow loop variables to be declared
within the for statement itself. That will simplify these macros:



> +#define subbuf_page(off, start) \
> + virt_to_page((void *)(start + (off << PAGE_SHIFT)))
> +
> +#define foreach_subbuf_page(off, sub_order, start, page) \
> + for (off = 0, page = subbuf_page(0, start); \
> + off < (1 << sub_order); \
> + off++, page = subbuf_page(off, start))

#define foreach_subbuf_page(sub_order, start, page)		\
	for (int __off = 0;					\
	     page = subbuf_page(__off, (start)),		\
	     __off < (1 << (sub_order));			\
	     __off++)

And macro parameters used as r-values should always be wrapped in parentheses.
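
For instance, if a caller ever passed an expression like "idx & mask" for
"off" (hypothetical, just to show the hazard), the current subbuf_page()
would expand to "start + (idx & (mask << PAGE_SHIFT))", because << binds
tighter than &. With the parameters parenthesized that can't happen:

#define subbuf_page(off, start) \
	virt_to_page((void *)((start) + ((off) << PAGE_SHIFT)))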

> +
> +static inline void subbuf_map_prepare(unsigned long subbuf_start, int order)
> +{
> + struct page *page;
> + int subbuf_off;

Then you can remove the "subbuf_off" variable.
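
That is, something like this (untested):

	struct page *page;

	foreach_subbuf_page(order, subbuf_start, page)
		page_ref_inc(page);

And the same treatment for subbuf_unmap() below.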

> +
> + /*
> + * When allocating order > 0 pages, only the first struct page has a
> + * refcount > 1. Increasing the refcount here ensures none of the struct
> + * page composing the sub-buffer is freed when the mapping is closed.

Nice catch by the way ;-)

-- Steve

> + */
> + foreach_subbuf_page(subbuf_off, order, subbuf_start, page)
> + page_ref_inc(page);
> +}
> +
> +static inline void subbuf_unmap(unsigned long subbuf_start, int order)
> +{
> + struct page *page;
> + int subbuf_off;
> +
> + foreach_subbuf_page(subbuf_off, order, subbuf_start, page) {
> + page_ref_dec(page);
> + page->mapping = NULL;
> + }
> +}
> +
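
To spell out the "nice catch" for anyone following along: a non-compound
order > 0 allocation only has a reference held on its first struct page, so
without the page_ref_inc() the tail pages could be handed back to the page
allocator once the user mapping goes away. Roughly, and purely as an
illustration (not part of the patch), assuming the sub-buffer pages are
later inserted into the user mapping by a vm_insert_page()-style helper
that takes its own reference on each page:

	struct page *head = virt_to_page((void *)subbuf_start);

	/*
	 * page_ref_count(head)     == 1  - taken at allocation time
	 * page_ref_count(head + 1) == 0  - tail pages start unreferenced
	 *
	 * munmap() drops the per-page reference taken when the pages were
	 * inserted into the user mapping; for the tail pages that put would
	 * reach zero and free them while the ring buffer still owns them,
	 * hence the page_ref_inc() in subbuf_map_prepare().
	 */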