Re: [RFC][PATCH 12/15] videobuf2: add begin/end cpu_access callbacks to dma-sg

From: Tomasz Figa
Date: Mon Jan 27 2020 - 23:39:07 EST


On Fri, Jan 10, 2020 at 7:13 PM Hans Verkuil <hverkuil@xxxxxxxxx> wrote:
>
> On 12/17/19 4:20 AM, Sergey Senozhatsky wrote:
> > Provide begin_cpu_access() and end_cpu_access() dma_buf_ops
> > callbacks for cache synchronisation on exported buffers.
> >
> > Signed-off-by: Sergey Senozhatsky <senozhatsky@xxxxxxxxxxxx>
> > ---
> > .../media/common/videobuf2/videobuf2-dma-sg.c | 22 +++++++++++++++++++
> > 1 file changed, 22 insertions(+)
> >
> > diff --git a/drivers/media/common/videobuf2/videobuf2-dma-sg.c b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
> > index 6db60e9d5183..bfc99a0cb7b9 100644
> > --- a/drivers/media/common/videobuf2/videobuf2-dma-sg.c
> > +++ b/drivers/media/common/videobuf2/videobuf2-dma-sg.c
> > @@ -470,6 +470,26 @@ static void vb2_dma_sg_dmabuf_ops_release(struct dma_buf *dbuf)
> >          vb2_dma_sg_put(dbuf->priv);
> >  }
> >
>
> There is no corresponding vb2_sg_buffer_consistent function here.
>
> Looking more closely I see that vb2_dma_sg_alloc doesn't pass the dma_attrs
> argument to dma_map_sg_attrs, thus V4L2_FLAG_MEMORY_NON_CONSISTENT has no
> effect on dma-sg buffers.

videobuf2-dma-sg allocates the memory using the page allocator directly,
which means that there is no memory consistency guarantee; the buffers
are non-coherent and any CPU access needs explicit cache synchronization.
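
Roughly, and glossing over the real allocation logic, the dma-sg path
boils down to the sketch below (just an illustration, not the exact code
in videobuf2-dma-sg.c; names like num_pages and size are placeholders and
error handling is omitted). Pages come straight from the page allocator
and are only mapped for the device, i.e. streaming DMA, which is why CPU
accesses need the dma_sync_sg_*() calls this patch adds:

    /*
     * Sketch only: the buffer is backed by plain pages from the page
     * allocator and mapped with dma_map_sg(), so no coherent
     * (consistent) mapping exists anywhere.
     */
    unsigned int i;

    for (i = 0; i < num_pages; i++)
            buf->pages[i] = alloc_page(GFP_KERNEL | __GFP_ZERO);

    sg_alloc_table_from_pages(&buf->sg_table, buf->pages, num_pages,
                              0, size, GFP_KERNEL);

    /* Device mapping only; the CPU view stays cacheable and unsynced. */
    dma_map_sg(buf->dev, buf->sg_table.sgl, buf->sg_table.orig_nents,
               buf->dma_dir);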

>
> Is there a reason why dma_attrs isn't passed on to dma_map_sg_attrs()?
>

V4L2_FLAG_MEMORY_NON_CONSISTENT is a flag for dma_alloc_attrs(). It
isn't supposed to do anything for dma_map_sg_attrs(), which is only
supposed to create the device (e.g. IOMMU) mapping for already
allocated memory.
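
To put it differently (a simplified sketch, not code from the tree), the
consistency decision is made when the memory is allocated, while
dma_map_sg_attrs() only creates the mapping for it afterwards:

    /*
     * Allocation time: this is where a (non-)consistency attribute
     * matters. DMA_ATTR_NON_CONSISTENT is the dma_alloc_attrs()
     * counterpart of what V4L2_FLAG_MEMORY_NON_CONSISTENT requests.
     */
    vaddr = dma_alloc_attrs(dev, size, &dma_addr, GFP_KERNEL,
                            DMA_ATTR_NON_CONSISTENT);

    /*
     * Mapping time: for memory allocated elsewhere (e.g. alloc_page()),
     * the attrs only affect how the IOMMU/device mapping is created,
     * not the coherency of the memory itself.
     */
    dma_map_sg_attrs(dev, sgt->sgl, sgt->orig_nents, dir, 0);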

> I suspect it was just laziness in the past, and that it should be wired
> up, just as for dma-contig.
>
> Regards,
>
> Hans
>
> > +static int vb2_dma_sg_dmabuf_ops_begin_cpu_access(struct dma_buf *dbuf,
> > +                                                  enum dma_data_direction direction)
> > +{
> > +        struct vb2_dma_sg_buf *buf = dbuf->priv;
> > +        struct sg_table *sgt = buf->dma_sgt;
> > +
> > +        dma_sync_sg_for_cpu(buf->dev, sgt->sgl, sgt->nents, buf->dma_dir);
> > +        return 0;
> > +}
> > +
> > +static int vb2_dma_sg_dmabuf_ops_end_cpu_access(struct dma_buf *dbuf,
> > +                                                enum dma_data_direction direction)
> > +{
> > +        struct vb2_dma_sg_buf *buf = dbuf->priv;
> > +        struct sg_table *sgt = buf->dma_sgt;
> > +
> > +        dma_sync_sg_for_device(buf->dev, sgt->sgl, sgt->nents, buf->dma_dir);
> > +        return 0;
> > +}
> > +
> >  static void *vb2_dma_sg_dmabuf_ops_vmap(struct dma_buf *dbuf)
> >  {
> >          struct vb2_dma_sg_buf *buf = dbuf->priv;
> > @@ -488,6 +508,8 @@ static const struct dma_buf_ops vb2_dma_sg_dmabuf_ops = {
> >          .detach = vb2_dma_sg_dmabuf_ops_detach,
> >          .map_dma_buf = vb2_dma_sg_dmabuf_ops_map,
> >          .unmap_dma_buf = vb2_dma_sg_dmabuf_ops_unmap,
> > +        .begin_cpu_access = vb2_dma_sg_dmabuf_ops_begin_cpu_access,
> > +        .end_cpu_access = vb2_dma_sg_dmabuf_ops_end_cpu_access,
> >          .vmap = vb2_dma_sg_dmabuf_ops_vmap,
> >          .mmap = vb2_dma_sg_dmabuf_ops_mmap,
> >          .release = vb2_dma_sg_dmabuf_ops_release,
> >
>
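
For completeness, these two callbacks are what a userspace importer of
the exported buffer ends up triggering through the dma-buf sync ioctl.
Roughly (assuming fd is the exported dma-buf file descriptor and the
buffer has been mmapped):

    #include <linux/dma-buf.h>
    #include <sys/ioctl.h>

    struct dma_buf_sync sync = { 0 };

    /* invalidates/cleans caches via begin_cpu_access() */
    sync.flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_RW;
    ioctl(fd, DMA_BUF_IOCTL_SYNC, &sync);

    /* ... CPU reads/writes the mapped buffer ... */

    /* writes back caches via end_cpu_access() */
    sync.flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_RW;
    ioctl(fd, DMA_BUF_IOCTL_SYNC, &sync);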