Re: [PATCH v14 02/12] drm/shmem-helper: Add pages_pin_count field

From: Boris Brezillon
Date: Tue Jul 25 2023 - 04:32:43 EST


On Tue, 25 Jul 2023 09:27:09 +0200
Boris Brezillon <boris.brezillon@xxxxxxxxxxxxx> wrote:

> On Sun, 23 Jul 2023 02:47:36 +0300
> Dmitry Osipenko <dmitry.osipenko@xxxxxxxxxxxxx> wrote:
>
> > Add a new pages_pin_count field to struct drm_gem_shmem_object that will
> > determine whether pages are evictable by the memory shrinker. The pages
> > will be evictable only when pages_pin_count=0. This patch prepares the
> > code for the addition of the memory shrinker that will utilize the new
> > field.
> >
> > Signed-off-by: Dmitry Osipenko <dmitry.osipenko@xxxxxxxxxxxxx>
> > ---
> > drivers/gpu/drm/drm_gem_shmem_helper.c | 9 +++++++++
> > include/drm/drm_gem_shmem_helper.h | 9 +++++++++
> > 2 files changed, 18 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > index 267153853e2c..42ba201dda50 100644
> > --- a/drivers/gpu/drm/drm_gem_shmem_helper.c
> > +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
> > @@ -274,15 +274,24 @@ static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
> > dma_resv_assert_held(shmem->base.resv);
> >
> > ret = drm_gem_shmem_get_pages(shmem);
> > + if (!ret)
> > + shmem->pages_pin_count++;
> >
> > return ret;
> > }
> >
> > static void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem)
> > {
> > + struct drm_gem_object *obj = &shmem->base;
> > +
> > dma_resv_assert_held(shmem->base.resv);
> >
> > + if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_pin_count))
> > + return;
> > +
> > drm_gem_shmem_put_pages(shmem);
> > +
> > + shmem->pages_pin_count--;
> > }
> >
> > /**
> > diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
> > index bf0c31aa8fbe..7111f5743006 100644
> > --- a/include/drm/drm_gem_shmem_helper.h
> > +++ b/include/drm/drm_gem_shmem_helper.h
> > @@ -39,6 +39,15 @@ struct drm_gem_shmem_object {
> > */
> > unsigned int pages_use_count;
> >
> > + /**
> > + * @pages_pin_count:
> > + *
> > + * Reference count on the pinned pages table.
> > + * The pages are allowed to be evicted by the memory
> > + * shrinker only when the count is zero.
> > + */
> > + unsigned int pages_pin_count;
>
> Can we make it an atomic_t, so we can avoid taking the lock when the
> GEM has already been pinned? That's something I need: being able to
> grab a pin-ref in a path where the GEM resv lock is already held[1].
> We could of course expose the locked version,

My bad, that's actually not true. The problem is not that I call
drm_gem_shmem_pin() with the resv lock already held, but that I call
drm_gem_shmem_pin() in a dma-signaling path where I'm not allowed to
take the resv lock. I know for sure pin_count > 0, because all GEM
objects mapped to a VM have their memory pinned right now. This should
hold until we decide to add support for live-GEM eviction, at which
point we'll probably have a way to detect that a GEM has been evicted
and avoid calling drm_gem_shmem_pin() on it.
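
To make that concrete, here is a rough sketch (not a real patch, and
assuming pages_pin_count is turned into an atomic_t) of how
drm_gem_shmem_pin() could take a lock-free fast path when the object is
already pinned, which is exactly the situation I'm in:

int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
{
        int ret;

        /*
         * Fast path: the object is already pinned, just take another
         * reference without touching the resv lock. atomic_add_unless()
         * only increments when the current value is non-zero, so the
         * 0 -> 1 transition still goes through the slow path below.
         * Calling this from a dma-signaling path is only safe because
         * the caller knows the object is already pinned.
         */
        if (atomic_add_unless(&shmem->pages_pin_count, 1, 0))
                return 0;

        /* Slow path: first pin, lock and populate the pages. */
        ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
        if (ret)
                return ret;

        /* drm_gem_shmem_pin_locked() bumps pages_pin_count under the lock. */
        ret = drm_gem_shmem_pin_locked(shmem);
        dma_resv_unlock(shmem->base.resv);

        return ret;
}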

TL;DR: I can't trade the atomic_t for a drm_gem_shmem_pin_locked(),
because that wouldn't solve my problem. The other solution would be to
add an atomic_t at the driver-GEM level and only call
drm_gem_shmem_[un]pin() on 0 <-> 1 transitions, but I thought using an
atomic at the GEM-shmem level, to avoid locking when we can, would be
beneficial to the rest of the ecosystem. Let me know if that's not an
option, and I'll go back to the driver-specific atomic_t.
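
For completeness, that driver-specific fallback would look roughly like
the sketch below (all names are made up, and it assumes the first pin
and last unpin are serialized by the caller, e.g. done at VM map/unmap
time, so the 0 <-> 1 transitions can't race):

struct my_gem {
        struct drm_gem_shmem_object base;
        atomic_t pin_count;     /* driver-level pin refcount */
};

static int my_gem_pin(struct my_gem *bo)
{
        int ret;

        /* Already pinned: lock-free, safe from a dma-signaling path. */
        if (atomic_add_unless(&bo->pin_count, 1, 0))
                return 0;

        /* 0 -> 1 transition: may sleep, forwarded to the shmem helper. */
        ret = drm_gem_shmem_pin(&bo->base);
        if (!ret)
                atomic_set(&bo->pin_count, 1);

        return ret;
}

static void my_gem_unpin(struct my_gem *bo)
{
        /* Drop the shmem pin only on the 1 -> 0 transition. */
        if (atomic_dec_and_test(&bo->pin_count))
                drm_gem_shmem_unpin(&bo->base);
}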

> but in my case, I want to enforce
> the fact that the GEM has been pinned before the drm_gem_shmem_pin()
> call in the section protected by the resv lock, so catching a
> "refcount 0 -> 1" situation would be useful. Besides, using an atomic
> to avoid the lock/unlock dance when refcount > 1 might be beneficial
> to everyone.
>
> [1]https://gitlab.freedesktop.org/bbrezillon/linux/-/commit/4420fa0d5768ebdc35b34d58d4ae5fad9fbb93f9
>
> > +
> > /**
> > * @madv: State for madvise
> > *
>