Re: DPAA2 triggers, [PATCH] dma debug: report -EEXIST errors in add_dma_entry

From: Gerald Schaefer
Date: Wed Oct 06 2021 - 11:06:57 EST


On Wed, 6 Oct 2021 15:23:36 +0100
Robin Murphy <robin.murphy@xxxxxxx> wrote:

> On 2021-10-06 14:10, Gerald Schaefer wrote:
> > On Fri, 1 Oct 2021 14:52:56 +0200
> > Gerald Schaefer <gerald.schaefer@xxxxxxxxxxxxx> wrote:
> >
> >> On Thu, 30 Sep 2021 15:37:33 +0200
> >> Karsten Graul <kgraul@xxxxxxxxxxxxx> wrote:
> >>
> >>> On 14/09/2021 17:45, Ioana Ciornei wrote:
> >>>> On Wed, Sep 08, 2021 at 10:33:26PM -0500, Jeremy Linton wrote:
> >>>>> +DPAA2, netdev maintainers
> >>>>> Hi,
> >>>>>
> >>>>> On 5/18/21 7:54 AM, Hamza Mahfooz wrote:
> >>>>>> Since overlapping mappings are not supported by the DMA API, we should
> >>>>>> report an error if active_cacheline_insert returns -EEXIST.
> >>>>>
> >>>>> It seems this patch found a victim. I was trying to run iperf3 on a
> >>>>> honeycomb (5.14.0, fedora 35) and the console is blasting this error message
> >>>>> at 100% cpu. So, I changed it to a WARN_ONCE() to get the call trace, which
> >>>>> is attached below.
> >>>>>
> >>>>
> >>>> These frags are allocated by the stack, transformed into a scatterlist
> >>>> by skb_to_sgvec and then DMA mapped with dma_map_sg. It was not
> >>>> dpaa2-eth's decision to use two fragments from the same page (which will
> >>>> also end up in the same cacheline) in two different in-flight skbs.
> >>>>
> >>>> Is this behavior normal?
> >>>>
> >>>
> >>> We see the same problem here and it started with 5.15-rc2 in our nightly CI runs.
> >>> The CI has panic_on_warn enabled so we see the panic every day now.
> >>
> >> Adding a WARN for a case that can be detected as a false positive does
> >> not seem acceptable, exactly for this reason (kernel panic on unaffected
> >> systems).
> >>
> >> So I guess it boils down to the question of whether the behavior that
> >> Ioana described is legitimate behavior on a system that is DMA coherent.
> >> We are apparently hitting the same scenario, although it could not yet be
> >> reproduced with debug printks for some reason.
> >>
> >> If the answer is yes, then please at least remove the WARN, so that it
> >> will not crash systems that behave correctly and have panic_on_warn set.
> >> Even a normal printk feels wrong to me in that case; it rather sounds
> >> like the overlap check should be fixed or refined, if anything is to be
> >> reported here at all.
> >
> > Dan, Christoph, any opinion?
> >
> > So far it all looks a lot like a false positive, so could you please
> > see that those patches get reverted? I do wonder a bit why this is
> > not an issue for others; we surely cannot be the only ones running
> > CI with panic_on_warn.
>
> What convinces you it's a false positive? I'm hardly familiar with most
> of that callstack, but it appears to be related to mlx5, and I know that
> exists on expansion cards which could be plugged into a system with
> non-coherent PCIe where partial cacheline overlap *would* be a real
> issue. Of course it's dubious that there are many real use-cases for
> plugging a NIC with a 4-figure price tag into a little i.MX8 or
> whatever, but the point is that it *should* still work correctly.

I would assume that a *proper* warning would check whether we are in the
"non-coherent" case, e.g. by using dev_is_dma_coherent(), and only report
with a potentially fatal WARN on systems where that is appropriate.
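
Something along these lines is what I had in mind, as a completely untested
sketch against add_dma_entry() in kernel/dma/debug.c (dev_is_dma_coherent()
would need <linux/dma-map-ops.h> there):

	rc = active_cacheline_insert(entry);
	if (rc == -ENOMEM) {
		pr_err_once("cacheline tracking ENOMEM, dma-debug disabled\n");
		global_disable = true;
	} else if (rc == -EEXIST) {
		/*
		 * Overlapping mappings within one cache line can only
		 * corrupt data on non-coherent devices, so do not trip
		 * panic_on_warn on systems where they are harmless.
		 */
		if (!dev_is_dma_coherent(entry->dev))
			err_printk(entry->dev, entry,
				   "cacheline tracking EEXIST, overlapping mappings aren't supported\n");
	}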

However, I am certainly even less familiar with all that, and might
just have gotten the wrong impression here.

Also, I am not sure about the mlx5 relation here. It does not really show
in the call trace, only in the err_printk() output, probably coming from
dev_driver_string(dev) or dev_name(dev). But I do not see where mlx5 code
would be involved here.

[...]
> According to the streaming DMA API documentation, it is *not* valid:
>
> ".. warning::
>
> Memory coherency operates at a granularity called the cache
> line width. In order for memory mapped by this API to operate
> correctly, the mapped region must begin exactly on a cache line
> boundary and end exactly on one (to prevent two separately mapped
> regions from sharing a single cache line). Since the cache line size
> may not be known at compile time, the API will not enforce this
> requirement. Therefore, it is recommended that driver writers who
> don't take special care to determine the cache line size at run time
> only map virtual regions that begin and end on page boundaries (which
> are guaranteed also to be cache line boundaries)."

Thanks, but I cannot really make a lot of sense out of this. Which
driver exactly would be the one that needs to take care of the
cache line alignment for sg elements? If this WARN is really reporting
a bug, could you please help point to where it would need to be
addressed?

And does this really say that it is illegal to have multiple sg elements
within the same cache line, regardless of cache coherence?
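
If it does, I assume a driver would need to do something like the following
for buffers that end up in dma_map_single() / dma_map_sg(). This is just a
sketch to illustrate my understanding, using dma_get_cache_alignment(), and
the allocation and error handling here are made up for the example:

	size_t align = dma_get_cache_alignment();
	size_t aligned_len = ALIGN(len, align);
	dma_addr_t handle;
	void *buf;

	/*
	 * kmalloc() guarantees at least ARCH_KMALLOC_MINALIGN alignment,
	 * which may or may not be a full cache line depending on the
	 * architecture.
	 */
	buf = kmalloc(aligned_len, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* buffer is intended to start and end on a cache line boundary */
	handle = dma_map_single(dev, buf, aligned_len, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, handle)) {
		kfree(buf);
		return -ENOMEM;
	}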

Adding linux-rdma@xxxxxxxxxxxxxxx, sorry for the noise, but maybe somebody
on that list can make more sense of this.

For reference, the link to the start of this thread:
https://lkml.kernel.org/r/fd67fbac-64bf-f0ea-01e1-5938ccfab9d0@xxxxxxx