Re: [PATCH] ring-buffer: Simplify reservation with try_cmpxchg() loop

From: Mathieu Desnoyers
Date: Fri Jan 19 2024 - 15:56:39 EST


On 2024-01-19 10:37, Steven Rostedt wrote:
On Fri, 19 Jan 2024 09:40:27 -0500
Mathieu Desnoyers <mathieu.desnoyers@xxxxxxxxxxxx> wrote:

On 2024-01-18 18:12, Steven Rostedt wrote:
From: "Steven Rostedt (Google)" <rostedt@xxxxxxxxxxx>


[...]

Although it does not get rid of the double timestamps (before_stamp and
write_stamp), using cmpxchg() does get rid of the more complex case when
an interrupting event occurs between getting the timestamps and reserving
the data: when that happens, it simply tries again instead of dealing
with it.

I understand that the reason why you need the before/after stamps and their
associated complexity is that the Ftrace ring buffer ABI encodes event
timestamps as deltas from the previous event within the buffer, as a means of
compressing the timestamp fields. If the delta cannot be represented in a
given number of bits, then it inserts a 64-bit timestamp (not sure if that
one is absolute or a delta from the previous event).

There's both. An extended timestamp, which is added when the delta is too
big, and that too is just a delta from the previous event. And there is the
absolute timestamp as well. I could always just use the absolute one. That
event came much later.

OK



This timestamp encoding as deltas between events introduces a strong
inter-dependency between consecutive (nested) events, and is the reason
why you are stuck with all this before/after timestamp complexity.

The Common Trace Format specifies (and LTTng implements) a different way
to achieve the same ring buffer space savings as timestamp deltas
while keeping the timestamps semantically absolute from a given reference,
hence without all the before/after timestamp complexity. You can see the
clock value decoding procedure in the CTF2 SPEC RC9 [1] document. The basic

That points to this:

---------------------8<-------------------------
6.3. Clock value update procedure
To update DEF_CLK_VAL from an unsigned integer field F having the unsigned integer value V and the class C:

Let L be an unsigned integer initialized to, depending on the type property of C:

"fixed-length-unsigned-integer"
The value of the length property of C.

"variable-length-unsigned-integer"
S × 7, where S is the number of bytes which F occupies within the data stream.

Let MASK be an unsigned integer initialized to 2^L − 1.

Let H be an unsigned integer initialized to DEF_CLK_VAL & ~MASK, where “&” is the bitwise AND operator and “~” is the bitwise NOT operator.

Let CUR be an unsigned integer initialized to DEF_CLK_VAL & MASK, where “&” is the bitwise AND operator.

Set DEF_CLK_VAL to:

If V ≥ CUR
H + V

Else
H + MASK + 1 + V
--------------------->8-------------------------

There's a lot of missing context there, so I don't see how it relates.

This explains how the "current time" is reconstructed by a trace reader
when loading an event header timestamp field. But for the sake of this
discussion we can focus on the less formal explanation of how the tracer
generates this timestamp encoding provided below.

idea on the producer side is to record the low-order bits of the current
timestamp in the event header (truncating the higher-order bits), and
fall back on a full 64-bit value if the low-order bits have overflowed
more than once since the previous timestamp, or if it is impossible to figure
out the timestamp of the previous event precisely due to a race. This achieves
the same space savings as delta timestamp encoding without introducing the
strong event inter-dependency.

So when an overflow happens, you just insert a timestamp, and then events
after that are based on that?

No. Let's use an example to show how it works.

For reference, LTTng uses 5 bits for the event ID and 27 bits for the timestamp
in the compact event header representation. But for the sake of making this
discussion easier, let's assume a tracer would use 16 bits for timestamps in the
compact representation.

Let's say we have the following ktime_get() values (monotonic timestamp values)
for a sequence of events:

                               Timestamp (Hex)  Encoding in the trace

Packet header timestamp begin  0x12345678       64-bit: 0x12345678

Event 1                        0x12345678       16-bit: 0x5678
  (When decoded, same value as the previous timestamp, no overflow.)
Event 2                        0x12347777       16-bit: 0x7777
  (When decoded, going from 0x5678 to 0x7777 does not overflow 16 bits.)
Event 3                        0x12350000       16-bit: 0x0000
  (When decoded, going from 0x7777 to 0x0000 overflows 16 bits exactly
  once, which allows the trace reader to reconstruct timestamp 0x12350000
  from the previous timestamp and the 16-bit timestamp encoding.)
Event 4                        0x12370000       64-bit: 0x12370000
  (Encoding over 16 bits is not possible because going from 0x12350000 to
  0x12370000 would overflow 16 bits twice, which cannot be detected by a
  trace reader. Therefore the full 64-bit timestamp is used in the "large"
  event header representation.)
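For what it's worth, the decoding rule this example relies on can be sketched
in a few lines of C. This is illustrative only (the function name is mine, not
LTTng reader code); it is a direct transcription of the clock value update
procedure quoted earlier:

```c
#include <stdint.h>

/*
 * Illustrative sketch: reconstruct the absolute timestamp from an l-bit
 * truncated field value v, given the previous absolute clock value
 * def_clk_val already known to the trace reader.
 */
static uint64_t clk_update(uint64_t def_clk_val, uint64_t v, unsigned int l)
{
	uint64_t mask = (l < 64) ? (((uint64_t)1 << l) - 1) : ~(uint64_t)0;
	uint64_t h = def_clk_val & ~mask;	/* high-order bits to keep */
	uint64_t cur = def_clk_val & mask;	/* low-order bits of prev. value */

	if (v >= cur)
		return h + v;		/* low-order bits did not wrap */
	return h + mask + 1 + v;	/* low-order bits wrapped exactly once */
}
```

Feeding the example values through this (0x5678, 0x7777, 0x0000 with a 16-bit
field) yields 0x12345678, 0x12347777 and 0x12350000 respectively; 0x12370000
is unreachable from 0x12350000 this way, hence the full 64-bit record for
event 4.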



The fact that Ftrace exposes this ring buffer binary layout as a user-space
ABI makes it tricky to move to the Common Trace Format timestamp encoding.
There are clearly huge simplifications that could be made by moving to this
scheme, though. Is there any way to introduce a different timestamp encoding
scheme as an extension to the Ftrace ring buffer ABI? This would allow us to
introduce this simpler scheme and gradually phase out the more complex delta
encoding when no users are left.

I'm not sure if there's a path forward. The infrastructure can easily swap
in and out a new implementation. That is, there's not much dependency on
the way the ring buffer works outside the ring buffer itself.

If we were to change the layout, it would likely require a new interface
file to read. The trace_pipe_raw is the only file that exposes the current
ring buffer. We could create a trace_out_raw or some other named file that
has a completely different API and it wouldn't break any existing API.

Or introduce "trace_pipe_raw2" or some kind of versioned file names as new
ABIs.

Although, if we want to change the "default" way, it may need some other
knobs or something, which wouldn't be hard.

The delta timestamp encoding would have to stay around for a while, as long
as users have not switched over to trace_pipe_raw2. Then when it's really
gone, trace_pipe_raw could either go away or return an error when
opened.

Now I have to ask, what's the motivation for this. The code isn't that
complex anymore. Yes it still has the before/after timestamps, but the
most complexity in that code was due to what happens in the race of
updating the reserved data. But that's no longer the case with the
cmpxchg(). If you look at this patch, that entire else statement was
deleted. And that deleted code was what made me sick to my stomach ;-)
Good riddance!

The motivation for this would be to further simplify the implementation
of __rb_reserve_next(), rb_add_timestamp(), and rb_try_to_discard(), and
remove a few useless loads, stores, and conditional branches on the fast-path
of __rb_reserve_next(). This would turn the before/after timestamp
stores/loads/comparisons into a simple "last timestamp" field and a comparison
of the current timestamp against the "last timestamp" value to figure out
whether the compact representation can be used.
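As a rough sketch of what that check could look like (hypothetical names, not
a patch against the actual ftrace code; assuming a 16-bit compact field as in
the earlier example):

```c
#include <stdbool.h>
#include <stdint.h>

#define TS_BITS	16	/* width of the compact timestamp field (example) */
#define TS_MASK	((((uint64_t)1) << TS_BITS) - 1)

/*
 * Hypothetical sketch: with a single "last timestamp" per buffer, the
 * compact representation is usable iff a reader applying the clock
 * update procedure to the truncated low-order bits of ts would
 * reconstruct ts exactly, i.e. the low-order bits wrapped at most once
 * since last_ts.
 */
static bool can_use_compact(uint64_t last_ts, uint64_t ts)
{
	uint64_t v = ts & TS_MASK;		/* bits that would be recorded */
	uint64_t h = last_ts & ~TS_MASK;	/* reader's high-order bits */
	uint64_t cur = last_ts & TS_MASK;
	uint64_t decoded = (v >= cur) ? h + v : h + TS_MASK + 1 + v;

	return decoded == ts;	/* false => emit the full 64-bit timestamp */
}
```

With the example sequence above, this returns true for events 1 through 3 and
false for event 4, matching where the full 64-bit record was needed.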

I don't know whether it's worth the trouble or not, it's really up to you. :)

Thanks,

Mathieu

--
Mathieu Desnoyers
EfficiOS Inc.
https://www.efficios.com