Re: [RFC] dma-buf: Implement test module

From: Thomas Hellstrom
Date: Sat Dec 14 2013 - 08:11:12 EST


On 12/14/2013 02:02 PM, Rob Clark wrote:
On Sat, Dec 14, 2013 at 7:47 AM, Thomas Hellstrom <thomas@xxxxxxxxxxxx> wrote:
On 12/14/2013 01:37 PM, Thierry Reding wrote:
On Thu, Dec 12, 2013 at 11:30:23PM +0100, Daniel Vetter wrote:
On Thu, Dec 12, 2013 at 8:34 PM, Thomas Hellstrom <thellstrom@xxxxxxxxxx>
wrote:
On 12/12/2013 03:36 PM, Thierry Reding wrote:
This is a simple test module that can be used to allocate, export and
delete DMA-BUF objects. It can be used to test DMA-BUF sharing in
systems that lack a real second driver.


Looks nice. I wonder whether this could be extended to create a "streaming"
dma-buf from a user space mapping. That could be used as a generic way to
implement streaming (user) buffer objects, rather than adding explicit
support for those in, for example, TTM.
Atm there's no way to get GPUs to unbind their dma-buf mappings, so
they're essentially pinned forever from first use on.
Shouldn't this work by simply calling the GEM_CLOSE IOCTL on the handle
returned by drmPrimeFDToHandle()? I mean that should drop the last
reference on the GEM object and cause it to be cleaned up (which should
include detaching the DMA-BUF).

Actually, while the GEM prime implementation appears to pin an exported
dma-buf on first attach, from the dma-buf documentation it seems sufficient
to pin it on map or CPU access.

But what I assume Daniel is referring to is that there is no way for
exporters to tell importers to force unmap() the dma-buf, so that it can be
unpinned?
yeah, or some way for importers to opportunistically keep around a
mapping rather than map/unmap on each use..

maybe we need something shrinker-ish for dmabuf?

Yes, I think that's needed both for the memory-shortage case where we want
to unpin, and I guess it would be desirable for IOMMU space management as
well.

/Thomas


BR,
-R


--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/