Re: [PATCH] drm/msm/gem: Drop early returns in close/purge vma

From: Rob Clark
Date: Wed Jun 15 2022 - 10:59:44 EST


On Sat, Jun 11, 2022 at 11:16 AM Steev Klimaszewski <steev@xxxxxxxx> wrote:
>
> Hi Rob,
>
> On 6/10/22 12:20 PM, Rob Clark wrote:
> > From: Rob Clark <robdclark@xxxxxxxxxxxx>
> >
> > Keep the warn, but drop the early return. If we do manage to hit this
> > sort of issue, skipping the cleanup just makes things worse (dangling
> > drm_mm_nodes when the msm_gem_vma is freed, etc). The worst that
> > happens if we tear down a mapping the GPU is still accessing is that
> > we get GPU iova faults; otherwise the world keeps spinning.
> >

forgot this initially:

Reported-by: Steev Klimaszewski <steev@xxxxxxxx>

> > Signed-off-by: Rob Clark <robdclark@xxxxxxxxxxxx>
> > ---
> >  drivers/gpu/drm/msm/msm_gem_vma.c | 6 ++----
> >  1 file changed, 2 insertions(+), 4 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/msm/msm_gem_vma.c b/drivers/gpu/drm/msm/msm_gem_vma.c
> > index 3c1dc9241831..c471aebcdbab 100644
> > --- a/drivers/gpu/drm/msm/msm_gem_vma.c
> > +++ b/drivers/gpu/drm/msm/msm_gem_vma.c
> > @@ -62,8 +62,7 @@ void msm_gem_purge_vma(struct msm_gem_address_space *aspace,
> >  	unsigned size = vma->node.size;
> >  
> >  	/* Print a message if we try to purge a vma in use */
> > -	if (GEM_WARN_ON(msm_gem_vma_inuse(vma)))
> > -		return;
> > +	GEM_WARN_ON(msm_gem_vma_inuse(vma));
> >  
> >  	/* Don't do anything if the memory isn't mapped */
> >  	if (!vma->mapped)
> > @@ -128,8 +127,7 @@ msm_gem_map_vma(struct msm_gem_address_space *aspace,
> >  void msm_gem_close_vma(struct msm_gem_address_space *aspace,
> >  		struct msm_gem_vma *vma)
> >  {
> > -	if (GEM_WARN_ON(msm_gem_vma_inuse(vma) || vma->mapped))
> > -		return;
> > +	GEM_WARN_ON(msm_gem_vma_inuse(vma) || vma->mapped);
> >  
> >  	spin_lock(&aspace->lock);
> >  	if (vma->iova)
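
To spell out the pattern (a minimal standalone sketch, not the actual
msm code; warn_on(), fake_vma, and close_vma() below are made up for
illustration): GEM_WARN_ON() still evaluates its condition and reports
the unexpected state, we just no longer bail out on it, so the
teardown always runs:

	#include <stdbool.h>
	#include <stdio.h>

	/* stand-in for GEM_WARN_ON(): report the unexpected state, but
	 * hand the condition back instead of forcing an early return */
	static bool warn_on(bool cond, const char *what)
	{
		if (cond)
			fprintf(stderr, "WARNING: %s\n", what);
		return cond;
	}

	struct fake_vma {
		bool in_use;	/* GPU may still be accessing the mapping */
		bool mapped;	/* mapping + allocator node still live */
	};

	static void close_vma(struct fake_vma *vma)
	{
		/*
		 * Before: if (warn_on(...)) return;  which skips the
		 * cleanup and leaves the mapping/node dangling when the
		 * object is freed.  After: warn, but tear down anyway;
		 * worst case the GPU takes iova faults on the stale
		 * mapping, and nothing is left dangling.
		 */
		warn_on(vma->in_use || vma->mapped, "closing busy vma");

		vma->mapped = false;	/* stands in for unmap + node removal */
	}

	int main(void)
	{
		struct fake_vma vma = { .in_use = true, .mapped = true };

		close_vma(&vma);	/* warns, but cleanup still runs */
		return 0;
	}
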
>
> I've seen the splat on the Lenovo Yoga C630 here and have tested this
> patch; as described, the splat still happens, but the system remains
> usable.
>
> Tested-by: Steev Klimaszewski <steev@xxxxxxxx>
>