Re: [PATCH v4] Documentation/gpu: VM_BIND locking document

From: Danilo Krummrich
Date: Thu Nov 16 2023 - 09:02:45 EST


On 11/16/23 12:48, Thomas Hellström wrote:

<snip>

+Locks used and locking orders
+=============================
+
+One of the benefits of VM_BIND is that local GEM objects share the
+gpu_vm's dma_resv object and hence the dma_resv lock. So even with a
+huge number of local GEM objects, only one lock is needed to make the
+exec sequence atomic.
+
+The following locks and locking orders are used:
+
+* The ``gpu_vm->lock`` (optionally an rwsem). Protects how the gpu_vm
+  is partitioned into gpu_vmas. It can also protect the gpu_vm's list
+  of userptr gpu_vmas. With a CPU mm analogy this would correspond to
+  the mmap_lock.

I don't see any drm_gpuvm::lock field in Danilo's latest patchset, so,
unless I missed one version, and this lock is actually provided by
drm_gpuvm, I would mention this is a driver-specific lock. This comment
applies to all the locks you describe here actually (mention which ones
are provided by drm_gpuvm, and which ones are driver-specific).

These will also be needed by gpuvm when implementing userptr vmas, so I
can mention that drm_gpuvm currently lacks a userptr implementation and
that "the locks described below are to be considered driver-specific
for now".

Since Xe already implements userptr support, are you guys maybe interested
in extending drm_gpuvm accordingly? :-)