Re: [PATCH v2 01/21] docs/memory-barriers.txt: Rewrite "KERNEL I/O BARRIER EFFECTS" section

From: Will Deacon
Date: Tue Apr 16 2019 - 05:14:05 EST


Hi Ben,

On Mon, Apr 15, 2019 at 02:05:30PM +1000, Benjamin Herrenschmidt wrote:
> On Fri, 2019-04-12 at 14:17 +0100, Will Deacon wrote:
> >
> > + the same CPU thread to a particular device will arrive in program
> > + order.
> > +
> > + 2. A writeX() by a CPU thread to the peripheral will first wait for the
> > + completion of all prior writes to memory either issued by the thread
> > + or issued while holding a spinlock that was subsequently taken by the
> > + thread. This ensures that writes by the CPU to an outbound DMA
> > + buffer allocated by dma_alloc_coherent() will be visible to a DMA
> > + engine when the CPU writes to its MMIO control register to trigger
> > + the transfer.
>
> Not particularly trying to be annoying here, but I find the above
> rather hard to parse :) I know what you're getting at, but I'm not sure
> somebody who doesn't will understand.
>
> One way would be to instead prefix the whole thing with a blurb along
> the lines of:
>
> readX() and writeX() provide some ordering guarantees versus
> each other and other memory accesses that are described below.
> Those guarantees apply to accesses performed either by the same
> logical thread of execution, or by different threads but while
> holding the same lock (spinlock or mutex).
>
> Then have a simpler description of each case, no?

Argh, I think we've ended up confusing two different things in our edits:

1. Ordering of readX()/writeX() between threads
2. Ordering of memory accesses in one thread vs readX()/writeX() in another

and these are very different beasts.

For (1), with my mmiowb() patches we can provide some guarantees for
writeX() in conjunction with spinlocks. I'm not convinced we can provide
these same guarantees for combinations involving readX(). For example:

CPU 1:
	spin_lock(&dev_lock);
	val1 = readl(dev_base + REG1);
	flag = 1;
	spin_unlock(&dev_lock);

CPU 2:
	spin_lock(&dev_lock);
	if (flag == 1)
		val2 = readl(dev_base + REG2);

In the case that CPU 2 sees the updated flag, do we require that CPU 1's readl()
reads from the device first? I'm not sure that RISC-V's implementation ensures
that readl() is ordered with a subsequent spin_unlock().
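
For contrast, the writeX() plus spinlock case that the mmiowb() work is
intended to handle looks like this (same hypothetical dev_lock, dev_base
and registers as in the example above):

CPU 1:
	spin_lock(&dev_lock);
	writel(val1, dev_base + REG1);
	spin_unlock(&dev_lock);

CPU 2:
	spin_lock(&dev_lock);
	writel(val2, dev_base + REG2);
	spin_unlock(&dev_lock);

Here bullet 2 of the proposed text guarantees that the device observes
the two MMIO writes in an order consistent with the order in which the
lock was taken, without the drivers needing an explicit mmiowb().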

For (2), we would need to make this part of LKMM if we wanted to capture
the precise semantics here (e.g. by using the 'prop' relation to figure out
which writes are ordered by a writel). This is a pretty significant piece of
work, so perhaps just referring informally to propagation would be better for
the English text.

Updated diff below.

Will

--->8

diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index 1660dde75e14..bc4c6a76c53a 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -2524,26 +2524,36 @@ guarantees:

1. All readX() and writeX() accesses to the same peripheral are ordered
with respect to each other. This ensures that MMIO register writes by
- the CPU to a particular device will arrive in program order.
-
- 2. A writeX() by the CPU to the peripheral will first wait for the
- completion of all prior CPU writes to memory. This ensures that
- writes by the CPU to an outbound DMA buffer allocated by
- dma_alloc_coherent() will be visible to a DMA engine when the CPU
- writes to its MMIO control register to trigger the transfer.
-
- 3. A readX() by the CPU from the peripheral will complete before any
- subsequent CPU reads from memory can begin. This ensures that reads
- by the CPU from an incoming DMA buffer allocated by
- dma_alloc_coherent() will not see stale data after reading from the
- DMA engine's MMIO status register to establish that the DMA transfer
- has completed.
-
- 4. A readX() by the CPU from the peripheral will complete before any
- subsequent delay() loop can begin execution. This ensures that two
- MMIO register writes by the CPU to a peripheral will arrive at least
- 1us apart if the first write is immediately read back with readX()
- and udelay(1) is called prior to the second writeX():
+ the same CPU thread to a particular device will arrive in program
+ order.
+
+ 2. A writeX() issued by a CPU thread holding a spinlock is ordered
+ before a writeX() to the same peripheral from another CPU thread
+ issued after a later acquisition of the same spinlock. This ensures
+ that MMIO register writes to a particular device issued while holding
+ a spinlock will arrive in an order consistent with acquisitions of
+ the lock.
+
+ 3. A writeX() by a CPU thread to the peripheral will first wait for the
+ completion of all prior writes to memory either issued by, or
+ propagated to, the same thread. This ensures that writes by the CPU
+ to an outbound DMA buffer allocated by dma_alloc_coherent() will be
+ visible to a DMA engine when the CPU writes to its MMIO control
+ register to trigger the transfer.
+
+ 4. A readX() by a CPU thread from the peripheral will complete before
+ any subsequent reads from memory by the same thread can begin. This
+ ensures that reads by the CPU from an incoming DMA buffer allocated
+ by dma_alloc_coherent() will not see stale data after reading from
+ the DMA engine's MMIO status register to establish that the DMA
+ transfer has completed.
+
+ 5. A readX() by a CPU thread from the peripheral will complete before
+ any subsequent delay() loop can begin execution on the same thread.
+ This ensures that two MMIO register writes by the CPU to a peripheral
+ will arrive at least 1us apart if the first write is immediately read
+ back with readX() and udelay(1) is called prior to the second
+ writeX():

writel(42, DEVICE_REGISTER_0); // Arrives at the device...
readl(DEVICE_REGISTER_0);
@@ -2559,10 +2569,11 @@ guarantees:

These are similar to readX() and writeX(), but provide weaker memory
ordering guarantees. Specifically, they do not guarantee ordering with
- respect to normal memory accesses or delay() loops (i.e. bullets 2-4
- above) but they are still guaranteed to be ordered with respect to other
- accesses to the same peripheral when operating on __iomem pointers
- mapped with the default I/O attributes.
+ respect to locking, normal memory accesses or delay() loops (i.e.
+ bullets 2-5 above) but they are still guaranteed to be ordered with
+ respect to other accesses from the same CPU thread to the same
+ peripheral when operating on __iomem pointers mapped with the default
+ I/O attributes.

(*) readsX(), writesX():

@@ -2600,8 +2611,10 @@ guarantees:
These will perform appropriately for the type of access they're actually
doing, be it inX()/outX() or readX()/writeX().

-All of these accessors assume that the underlying peripheral is little-endian,
-and will therefore perform byte-swapping operations on big-endian architectures.
+With the exception of the string accessors (insX(), outsX(), readsX() and
+writesX()), all of the above assume that the underlying peripheral is
+little-endian and will therefore perform byte-swapping operations on big-endian
+architectures.


========================================