Re: data_ring head_lpos and tail_lpos synchronization: was [PATCH v2 2/3] printk: add lockless buffer

From: John Ogness
Date: Wed Jun 10 2020 - 03:53:24 EST


On 2020-06-09, Petr Mladek <pmladek@xxxxxxxx> wrote:
>> --- /dev/null
>> +++ b/kernel/printk/printk_ringbuffer.c
>> +/*
>> + * Advance the data ring tail to at least @lpos. This function puts
>> + * descriptors into the reusable state if the tail is pushed beyond
>> + * their associated data block.
>> + */
>> +static bool data_push_tail(struct printk_ringbuffer *rb,
>> +                           struct prb_data_ring *data_ring,
>> +                           unsigned long lpos)
>> +{
>> +        unsigned long tail_lpos;
>> +        unsigned long next_lpos;
>> +
>> +        /* If @lpos is not valid, there is nothing to do. */
>> +        if (lpos == INVALID_LPOS)
>> +                return true;
>> +
>> +        tail_lpos = atomic_long_read(&data_ring->tail_lpos);
>
> Hmm, I wonder whether data_ring->tail_lpos and data_ring->head_lpos
> are sufficiently synchronized with each other.
>
> I feel that there should be a read barrier here. But it seems that
> more barriers are missing. For example, consider:
>
>
> CPU0                                    CPU1
>
> data_alloc()
>   begin_lpos = atomic_read(data_ring->head_lpos);
>
>                                         data_alloc()
>                                           data_push_tail()
>                                             cmpxchg(data_ring->tail_lpos);
>                                             // A: no barrier
>                                           cmpxchg(data_ring->head_lpos);
>
>   data_push_tail()
>     // B: no barrier
>     tail_lpos = atomic_read(data_ring->tail_lpos);
>
> Problem 1:
>
> CPU0 might see the data_ring->tail_lpos and head_lpos values modified
> by CPU1 in random order. Both write and read barriers are missing.

You need to explain why this is a problem. CPU0 saw some head and some
tail. Both values may lag behind the current values, but they can never
run ahead of them (i.e. there is no danger that CPU0 sees a tail that
is further along than the tail really is).

CPU0 then uses the head/tail values to determine how far to advance the
tail and how far to advance the head. Both of these advances use
cmpxchg_relaxed(). So there is no danger of random head/tail
modifications. Upon cmpxchg_relaxed() failure, the new current values
are loaded and it retries based on the new values.
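
To illustrate the pattern, here is a minimal sketch of such a retry
loop (simplified from the patch: the loop condition and the
reusable-marking step are reduced to their essence, the function name
is made up, and DATA_SIZE() is the ring-size macro from the patch):

static bool data_push_tail_sketch(struct prb_data_ring *data_ring,
                                  unsigned long lpos)
{
        unsigned long tail_lpos;

        tail_lpos = atomic_long_read(&data_ring->tail_lpos);

        /* Try until the tail is at or beyond @lpos. */
        while (lpos - tail_lpos < DATA_SIZE(data_ring)) {

                /* ... make the data blocks up to @lpos reusable ... */

                /*
                 * A relaxed cmpxchg is enough: on failure, @tail_lpos
                 * is reloaded with the value that another CPU
                 * installed and the loop re-evaluates. A stale
                 * initial read can only cause an extra iteration; it
                 * can never move the tail backwards.
                 */
                if (atomic_long_try_cmpxchg_relaxed(&data_ring->tail_lpos,
                                                    &tail_lpos, lpos))
                        break;
        }

        return true;
}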

The only issue is whether data_push_tail()/data_make_reusable() are
able to recognize that a data area has already been recycled. And both
functions have memory barriers in place for that.

> Problem 2:
>
> There might still be a chance, because CPU0 does:
>
> if (!data_make_reusable())
>         smp_rmb()
>         tail_lpos = atomic_read(data_ring->tail_lpos);
>
> But CPU0 might still see the old data_ring->tail_lpos because CPU1
> did not do a write barrier.

I claim that it does not matter. The smp_rmb() here pairs with the full
memory barrier LMM(desc_reserve:D). The reasoning, quoting the barrier
comment from the patch:

    * Guarantee any data ring tail changes are stored before
    * recycling the descriptor. Data ring tail changes can happen
    * via desc_push_tail()->data_push_tail(). A full memory
    * barrier is needed since another task may have pushed the
    * data ring tails. This pairs with data_push_tail:A.

So if data_make_reusable() failed due to a descriptor already being
recycled, we know CPU0 will be able to read an updated tail value (and
try again with the new value).
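
As a compressed model of that pairing (an illustration only, not the
patch code: desc_reserve:D is really a fully ordered cmpxchg on the
descriptor ring head, shown here as an explicit smp_mb(), and the
state value val_reusable and the elided arguments are placeholders):

/* CPU1: push the data ring tail, then recycle the descriptor. */
atomic_long_cmpxchg_relaxed(&data_ring->tail_lpos, tail_lpos, lpos);
smp_mb();       /* LMM(desc_reserve:D): tail store before recycle */
atomic_long_set(&desc->state_var, val_reusable);

/* CPU0: in data_push_tail(). */
if (!data_make_reusable(...)) {
        /*
         * The failure means CPU0 read the recycled descriptor
         * state, i.e. a value stored after the smp_mb() above...
         */
        smp_rmb();  /* LMM(data_push_tail:A) pairs with desc_reserve:D */

        /* ...so this read must see the tail change stored before it. */
        tail_lpos = atomic_long_read(&data_ring->tail_lpos);
}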

John Ogness