Re: [PATCH v3] udmabuf: add vmap and vunmap methods to udmabuf_ops

From: Lukasz Wiecaszek
Date: Wed Nov 16 2022 - 15:48:22 EST


On Wed, Nov 16, 2022 at 01:01:46PM +0100, Christian König wrote:
> Am 15.11.22 um 21:04 schrieb Lukasz Wiecaszek:
> > The reason behind this patch is associated with the videobuf2 subsystem
> > (or more generally with the v4l2 framework) and user-created
> > dma buffers (udmabuf). In some circumstances,
> > when dealing with V4L2_MEMORY_DMABUF buffers, the videobuf2 subsystem
> > wants to call dma_buf_vmap() on the attached dma buffer.
> > As udmabuf does not have a .vmap operation implemented,
> > such a dma_buf_vmap() call naturally fails.
> >
> > videobuf2_common: __vb2_queue_alloc: allocated 3 buffers, 1 plane(s) each
> > videobuf2_common: __prepare_dmabuf: buffer for plane 0 changed
> > videobuf2_common: __prepare_dmabuf: failed to map dmabuf for plane 0
> > videobuf2_common: __buf_prepare: buffer preparation failed: -14
> >
> > The patch itself seems to be straightforward.
> > It adds implementations of the .vmap and .vunmap methods
> > to 'struct dma_buf_ops udmabuf_ops'.
> > The .vmap method itself uses vm_map_ram() to map the pages linearly
> > into the kernel virtual address space.
> > .vunmap removes the mapping created earlier by .vmap.
> > All locking and 'vmap counting' is done in dma_buf.c,
> > so it seems to be redundant/unnecessary in .vmap/.vunmap.
> >
> > Signed-off-by: Lukasz Wiecaszek <lukasz.wiecaszek@xxxxxxxxx>
>
> > Reported-by: kernel test robot <lkp@xxxxxxxxx>
>
> Please drop this line; the kernel test robot should only be mentioned if the
> original report came from it.
>
> And keep in mind that it might be necessary to implement begin/end cpu
> access callbacks as well.
>
> Apart from that the patch is Acked-by: Christian König
> <christian.koenig@xxxxxxx>.
>
> Regards,
> Christian.

Thanks for that lesson with the 'kernel test robot' line.
The second point, about the begin/end cpu access callbacks, is less
obvious to me. My understanding is that the memory backing a udmabuf
will most likely (if not always) be obtained via memfd_create().
So this will be anonymous system memory, which is 'by definition'
coherent for cpu access, so there should be no need for the begin/end
callbacks. But if I am missing something, please let me/us know.
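
To make the memfd_create() point concrete, here is a minimal userspace
sketch of the path I have in mind (not part of the patch; error
handling is omitted and the helper name is made up for illustration):

#define _GNU_SOURCE
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/udmabuf.h>

/* Returns a dma-buf fd whose backing pages come from an anonymous
 * (shmem) memfd; this is the memory the new .vmap callback will map. */
static int create_udmabuf(size_t size)
{
	int memfd = memfd_create("udmabuf-backing", MFD_ALLOW_SEALING);
	int devfd = open("/dev/udmabuf", O_RDWR);
	struct udmabuf_create create;

	ftruncate(memfd, size);
	/* udmabuf requires the memfd to be sealed against shrinking */
	fcntl(memfd, F_ADD_SEALS, F_SEAL_SHRINK);

	memset(&create, 0, sizeof(create));
	create.memfd  = memfd;
	create.flags  = UDMABUF_FLAGS_CLOEXEC;
	create.offset = 0;	/* page aligned */
	create.size   = size;	/* page aligned */

	return ioctl(devfd, UDMABUF_CREATE, &create);
}

At the end of this mail I have also sketched how an importer such as
videobuf2 would end up calling the new .vmap/.vunmap callbacks.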

>
> > ---
> > v1: https://lore.kernel.org/linux-media/202211120352.G7WPASoP-lkp@xxxxxxxxx/T/#t
> > v2: https://lore.kernel.org/linux-media/20221114052944.GA7264@thinkpad-p72/T/#t
> >
> > v2 -> v3: Added .vunmap to 'struct dma_buf_ops udmabuf_ops'
> > v1 -> v2: Patch prepared and tested against 6.1.0-rc2+
> >
> > drivers/dma-buf/udmabuf.c | 28 ++++++++++++++++++++++++++++
> > 1 file changed, 28 insertions(+)
> >
> > diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
> > index 283816fbd72f..740d6e426ee9 100644
> > --- a/drivers/dma-buf/udmabuf.c
> > +++ b/drivers/dma-buf/udmabuf.c
> > @@ -13,6 +13,8 @@
> >  #include <linux/slab.h>
> >  #include <linux/udmabuf.h>
> >  #include <linux/hugetlb.h>
> > +#include <linux/vmalloc.h>
> > +#include <linux/iosys-map.h>
> >  
> >  static int list_limit = 1024;
> >  module_param(list_limit, int, 0644);
> > @@ -60,6 +62,30 @@ static int mmap_udmabuf(struct dma_buf *buf, struct vm_area_struct *vma)
> >  	return 0;
> >  }
> >  
> > +static int vmap_udmabuf(struct dma_buf *buf, struct iosys_map *map)
> > +{
> > +	struct udmabuf *ubuf = buf->priv;
> > +	void *vaddr;
> > +
> > +	dma_resv_assert_held(buf->resv);
> > +
> > +	vaddr = vm_map_ram(ubuf->pages, ubuf->pagecount, -1);
> > +	if (!vaddr)
> > +		return -EINVAL;
> > +
> > +	iosys_map_set_vaddr(map, vaddr);
> > +	return 0;
> > +}
> > +
> > +static void vunmap_udmabuf(struct dma_buf *buf, struct iosys_map *map)
> > +{
> > +	struct udmabuf *ubuf = buf->priv;
> > +
> > +	dma_resv_assert_held(buf->resv);
> > +
> > +	vm_unmap_ram(map->vaddr, ubuf->pagecount);
> > +}
> > +
> >  static struct sg_table *get_sg_table(struct device *dev, struct dma_buf *buf,
> >  				     enum dma_data_direction direction)
> >  {
> > @@ -162,6 +188,8 @@ static const struct dma_buf_ops udmabuf_ops = {
> >  	.unmap_dma_buf = unmap_udmabuf,
> >  	.release = release_udmabuf,
> >  	.mmap = mmap_udmabuf,
> > +	.vmap = vmap_udmabuf,
> > +	.vunmap = vunmap_udmabuf,
> >  	.begin_cpu_access = begin_cpu_udmabuf,
> >  	.end_cpu_access = end_cpu_udmabuf,
> >  };
>
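
For completeness, the importer-side call sequence I expect to end up in
the new callbacks looks roughly like this (a hand-written sketch, not
code taken from videobuf2; the function name is made up, and the exact
locking rules depend on the kernel version; newer kernels also provide
dma_buf_vmap_unlocked() for callers that do not take the reservation
lock themselves):

#include <linux/dma-buf.h>
#include <linux/dma-resv.h>
#include <linux/err.h>
#include <linux/iosys-map.h>

/* Hypothetical importer: vmap a udmabuf and peek at its first byte. */
static int example_vmap_peek(int dmabuf_fd)
{
	struct dma_buf *dmabuf;
	struct iosys_map map;
	int ret;

	dmabuf = dma_buf_get(dmabuf_fd);
	if (IS_ERR(dmabuf))
		return PTR_ERR(dmabuf);

	/* hold the reservation lock, as the dma_resv_assert_held() in
	 * vmap_udmabuf()/vunmap_udmabuf() expects */
	dma_resv_lock(dmabuf->resv, NULL);
	ret = dma_buf_vmap(dmabuf, &map);	/* ends up in vmap_udmabuf() */
	dma_resv_unlock(dmabuf->resv);
	if (ret)
		goto out_put;

	pr_info("udmabuf first byte: 0x%02x\n", *(u8 *)map.vaddr);

	dma_resv_lock(dmabuf->resv, NULL);
	dma_buf_vunmap(dmabuf, &map);		/* ends up in vunmap_udmabuf() */
	dma_resv_unlock(dmabuf->resv);

out_put:
	dma_buf_put(dmabuf);
	return ret;
}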