Re: [PATCH v1] drm/msm/dp: use dp_hpd_plug_handle() and dp_hpd_unplug_handle() directly

From: Bjorn Andersson
Date: Thu Mar 28 2024 - 21:47:21 EST


On Thu, Mar 28, 2024 at 02:21:14PM -0700, Abhinav Kumar wrote:
>
>
> On 3/28/2024 1:58 PM, Stephen Boyd wrote:
> > Quoting Abhinav Kumar (2024-03-28 13:24:34)
> > > + Johan and Bjorn for FYI
> > >
> > > On 3/28/2024 1:04 PM, Kuogee Hsieh wrote:
> > > > In the internal HPD case, hpd_event_thread is created to handle
> > > > HPD interrupts generated by the HPD block of the DP controller. It
> > > > converts HPD interrupts into events and executes them in
> > > > hpd_event_thread context. In the external HPD case, HPD events are
> > > > delivered by way of dp_bridge_hpd_notify(), which already runs in
> > > > thread context, so there is no reason to hand those events over to
> > > > hpd_event_thread. Hence dp_hpd_plug_handle() and
> > > > dp_hpd_unplug_handle() are called directly at dp_bridge_hpd_notify().
> > > >
> > > > Signed-off-by: Kuogee Hsieh <quic_khsieh@xxxxxxxxxxx>
> > > > ---
> > > > drivers/gpu/drm/msm/dp/dp_display.c | 5 +++--
> > > > 1 file changed, 3 insertions(+), 2 deletions(-)
> > > >
> > >
> > > Fixes: 542b37efc20e ("drm/msm/dp: Implement hpd_notify()")
> >
> > Is this a bug fix or an optimization? The commit text doesn't tell me.
> >
>
> I would say both.
>
> An optimization because it avoids the round trip through
> hpd_event_thread processing.
>
> A bug fix because going through the hpd event thread processing
> exposes, and often breaks, the already fragile HPD handling state
> machine, which can be avoided in this case.
>

It removes the main users of the thread, but there are still code paths
which will post events on the thread.

I think I like the direction this is taking, but does it really fix the
whole problem, or just patch one case?
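For reference, a condensed sketch of what the hpd_notify path looks
like with the direct calls, paraphrased from dp_display.c based on the
commit message (guard conditions and field names may not match the
posted diff exactly):

	static void dp_bridge_hpd_notify(struct drm_bridge *bridge,
					 enum drm_connector_status status)
	{
		struct msm_dp_bridge *dp_bridge = to_dp_bridge(bridge);
		struct msm_dp *dp_display = dp_bridge->dp_display;
		struct dp_display_private *dp = container_of(dp_display,
				struct dp_display_private, dp_display);

		/* Internal HPD interrupts still go through hpd_event_thread */
		if (dp_display->internal_hpd)
			return;

		/*
		 * hpd_notify() is already called in thread context, so call
		 * the handlers directly instead of posting EV_HPD_PLUG_INT /
		 * EV_HPD_UNPLUG_INT events to hpd_event_thread.
		 */
		if (status == connector_status_connected)
			dp_hpd_plug_handle(dp, 0);
		else if (status == connector_status_disconnected)
			dp_hpd_unplug_handle(dp, 0);
	}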


PS. Please read go/upstream and switch to b4, to avoid some practical
issues with the way you posted this patch.

Thanks,
Bjorn

> > >
> > > Looks right to me,
> > >
> > > Reviewed-by: Abhinav Kumar <quic_abhinavk@xxxxxxxxxxx>