Re: [PATCH v1 2/3] drm/msm/dp: decoupling dp->id out of dp controller_id at scxxxx_dp_cfg table

From: Kuogee Hsieh
Date: Fri Jun 24 2022 - 18:53:53 EST



On 6/24/2022 3:19 PM, Stephen Boyd wrote:
Quoting Kuogee Hsieh (2022-06-24 14:49:57)
On 6/24/2022 2:40 PM, Stephen Boyd wrote:
Quoting Kuogee Hsieh (2022-06-24 14:17:50)
On 6/24/2022 1:00 PM, Stephen Boyd wrote:
Quoting Kuogee Hsieh (2022-06-24 10:15:11)
Currently the index (dp->id) into the DP descriptor table (scxxxx_dp_cfg[]) is tightly
coupled with the DP controller_id. This means the DP that uses controller id 0 must be
placed at the first entry of the DP descriptor table (scxxxx_dp_cfg[]). Otherwise the
internal INTF will be mismatched with the controller_id, which causes the controller to
kick off the wrong interface timing engine and leads to a
dpu_encoder_phys_vid_wait_for_commit_done vblank timeout error.

This patch adds a controller_id field to struct msm_dp_desc to break the tight
coupling between the index (dp->id) into the DP descriptor table and the DP
controller_id.
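
For example, the descriptor could then look like this (a sketch; the existing
fields are taken from the tables below, and the exact type/placement of
controller_id is illustrative):

	struct msm_dp_desc {
		phys_addr_t io_start;
		unsigned int connector_type;
		unsigned int controller_id;	/* new: INTF/controller id, no longer tied to the table index */
		bool wide_bus_en;
	};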
Please no. This reverts the intention of commit bb3de286d992
("drm/msm/dp: Support up to 3 DP controllers")

A new enum is introduced to document the connection between the
instances referenced in the dpu_intf_cfg array and the controllers in
the DP driver and sc7180 is updated.

It sounds like the intent of that commit failed to make a strong enough
connection. Now it needs to match the INTF number as well? I can't
really figure out what is actually wrong, because this patch undoes that
intentional tight coupling. Is the next patch the important part that
flips the order of the two interfaces?
Commit bb3de286d992 has two problems:

1)  The sc7280_dp_cfg below will not work if eDP uses
MSM_DP_CONTROLLER_2 instead of MSM_DP_CONTROLLER_1
Why would we use three indices for an SoC that only has two indices
possible? This is not a real problem?
I do not know what will happen in the future; there may be more DP
controllers in use later.

On the current SoC, the table below, which has only one eDP entry, will not work either:

static const struct msm_dp_config sc7280_dp_cfg = {
        .descs = (const struct msm_dp_desc[]) {
                [MSM_DP_CONTROLLER_1] = { .io_start = 0x0aea0000,
                        .connector_type = DRM_MODE_CONNECTOR_eDP, .wide_bus_en = true },
        },
        .num_descs = 1,
};
So the MSM_DP_CONTROLLER_* number needs to match what exactly?

MSM_DP_CONTROLLER_1 needs to match index 1 of sc7280_dp_cfg[] <== this is correct.

The problem is that sc7280_dp_cfg[] has to have two entries, since eDP is placed at index MSM_DP_CONTROLLER_1.

but .num_descs = 1 <== this says the sc7280_dp_cfg[] table only has one entry. Therefore eDP will never be found by the for loop in _dpu_kms_initialize_displayport() (see the scan-loop sketch after the second table below).



since it has num_descs = 2 but the eDP entry is at index 2
(MSM_DP_CONTROLLER_2), which is never reached:

static const struct msm_dp_config sc7280_dp_cfg = {
        .descs = (const struct msm_dp_desc[]) {
                [MSM_DP_CONTROLLER_2] = { .io_start = 0x0aea0000,
                        .connector_type = DRM_MODE_CONNECTOR_eDP, .wide_bus_en = true },
                [MSM_DP_CONTROLLER_0] = { .io_start = 0x0ae90000,
                        .connector_type = DRM_MODE_CONNECTOR_DisplayPort, .wide_bus_en = true },
        },
        .num_descs = 2,
};
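
To make the failure concrete, here is a minimal sketch of the descriptor scan
(a hypothetical reduction; find_desc is an illustrative name, and the real
lookup in dp_display.c matches each descriptor's io_start against the device's
register base):

	/* Hypothetical reduction of the descriptor lookup (types as in
	 * dp_display.c).  Only indices 0 .. num_descs - 1 are visited, so
	 * with num_descs = 2 the eDP entry parked at [MSM_DP_CONTROLLER_2]
	 * (index 2) is never even compared. */
	static const struct msm_dp_desc *find_desc(const struct msm_dp_config *cfg,
						   phys_addr_t io_start)
	{
		unsigned int i;

		for (i = 0; i < cfg->num_descs; i++)
			if (cfg->descs[i].io_start == io_start)
				return &cfg->descs[i];

		/* eDP at 0x0aea0000 falls through to here, so its probe fails. */
		return NULL;
	}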

2)  DP always has index 0 (dp->id = 0) and is the first one to call
msm_dp_modeset_init(). This makes DP always be placed at the head of the bridge chain.
What does this mean? Are you talking about the list of bridges in drm
core, i.e. 'bridge_list'?
Yes.
I changed the drm_bridge_add() API and that doesn't make any difference.
The corruption is still seen. That would imply it is not the order of
the list of bridges.

Sorry, my mistake. It is not in drm_bridge_add().

It should be in dpu_encoder_init() of _dpu_kms_initialize_displayport().

Can you make the changes (patch) below to _dpu_kms_initialize_displayport()?

kuogee: go backward for dp modeset_init

diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
index 3a4da0d..b271a4b 100644
--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
+++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_kms.c
@@ -611,9 +611,15 @@ static int _dpu_kms_initialize_displayport(struct drm_device *dev,
 	struct drm_encoder *encoder = NULL;
 	struct msm_display_info info;
 	int rc;
-	int i;
+	int i, num;
+
+	num = ARRAY_SIZE(priv->dp);
 
+#ifdef XXXX
 	for (i = 0; i < ARRAY_SIZE(priv->dp); i++) {
+#else
+	for (i = num - 1; i >= 0; i--) {
+#endif
 		if (!priv->dp[i])
 			continue;
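
For clarity, this is the effect the reversed walk is meant to have
(illustrative sketch; assumes priv->dp[0] is the external DP and priv->dp[1]
is eDP, as in the tables above):

	/* Forward walk:  priv->dp[0] (DP)  hits msm_dp_modeset_init() first,
	 *                so the DP bridge is registered ahead of eDP.
	 * Reversed walk: priv->dp[1] (eDP) hits msm_dp_modeset_init() first,
	 *                so the eDP bridge is registered ahead of DP. */
	for (i = ARRAY_SIZE(priv->dp) - 1; i >= 0; i--) {
		if (!priv->dp[i])
			continue;
		/* encoder creation and error handling as in the existing loop */
		rc = msm_dp_modeset_init(priv->dp[i], dev, encoder);
	}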



---8<---
diff --git a/drivers/gpu/drm/drm_bridge.c b/drivers/gpu/drm/drm_bridge.c
index e275b4ca344b..e3518101b65e 100644
--- a/drivers/gpu/drm/drm_bridge.c
+++ b/drivers/gpu/drm/drm_bridge.c
@@ -165,7 +165,7 @@ void drm_bridge_add(struct drm_bridge *bridge)
 	mutex_init(&bridge->hpd_mutex);
 
 	mutex_lock(&bridge_lock);
-	list_add_tail(&bridge->list, &bridge_list);
+	list_add(&bridge->list, &bridge_list);
 	mutex_unlock(&bridge_lock);
 }
 EXPORT_SYMBOL(drm_bridge_add);

In the next patch, eDP must be placed at the head of the bridge chain to fix
the eDP corruption issue. That is the purpose of this patch. I will revise the
commit text.

Wouldn't that be "broken" again if we decided to change drm_bridge_add()
to add to the list head instead of list tail? Or if somehow
msm_dp_modeset_init() was called in a different order so that the DP
bridge was added before the eDP bridge?
We have no control over drm_bridge_add().

Since DRM performs the screen update by following the bridge chain
sequentially, we have to make sure the primary display always updates first.
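
I.e., roughly (an illustrative sketch of the sequential traversal, not the
exact drm core code):

	/* Illustrative only: bridges attached to an encoder are walked in
	 * list order on each update, so the bridge at the head of the chain
	 * is always programmed first. */
	list_for_each_entry(bridge, &encoder->bridge_chain, chain_node) {
		if (bridge->funcs->enable)
			bridge->funcs->enable(bridge);
	}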