[PATCH] iommu/arm-smmu-v3: Ensure queue base addresses are written before the queues are enabled

From: ni.liqiang
Date: Sun Feb 18 2024 - 00:02:44 EST


During a system reboot test, I encountered an issue:
after the OS started, the CMDQ base address had not been written
successfully and remained at its reset value of 0.

Timing analysis on the CMN interconnect showed that although
the write request for the CMDQ base is issued before the write
request for CMDQEN, the write response for the CMDQ base can
arrive later than the one for CMDQEN.

Upon reviewing the SMMU Architecture Specification,
I found the following explanation:
The registers must be initialized in this order:
1. Write SMMU_CMDQ_BASE to set the queue base and size.
2. Write initial values to SMMU_CMDQ_CONS and SMMU_CMDQ_PROD.
3. Enable the queue with an Update of the respective SMMU_CR0.CMDQEN to 1.

If there is no memory barrier between these register writes, how can
this order be guaranteed? I therefore believe that adding a memory
barrier before enabling the CMDQ is necessary to ensure that the CMDQ
base address has been written before CMDQEN is set.
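To illustrate, here is a simplified sketch of the CMDQ bring-up sequence
in arm_smmu_device_reset() with the proposed barrier. The enable step and
the PROD/CONS writes are taken from the diff below; the base write is
quoted from my reading of the current arm-smmu-v3.c, and error handling
is omitted:

    /* 1. Program the command queue base and size (relaxed write). */
    writeq_relaxed(smmu->cmdq.q.q_base, smmu->base + ARM_SMMU_CMDQ_BASE);

    /* 2. Initialize the producer and consumer indexes (relaxed writes). */
    writel_relaxed(smmu->cmdq.q.llq.prod, smmu->base + ARM_SMMU_CMDQ_PROD);
    writel_relaxed(smmu->cmdq.q.llq.cons, smmu->base + ARM_SMMU_CMDQ_CONS);

    /*
     * 3. Proposed barrier: make sure the base and index writes have
     *    completed before the queue is enabled.
     */
    __iomb();

    /* 4. Enable the queue and wait for SMMU_CR0ACK to reflect it. */
    arm_smmu_write_reg_sync(smmu, CR0_CMDQEN, ARM_SMMU_CR0,
                            ARM_SMMU_CR0ACK);

The EVENTQ and PRIQ paths follow the same pattern with their own BASE,
PROD and CONS registers and CR0 enable bits.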

The EVENTQ and PRIQ base addresses are subject to the same issue.

Could you please review whether this modification is reasonable? Thank you.

Signed-off-by: ni.liqiang <niliqiang.io@xxxxxxxxx>
Reviewed-by: jin.qi <jin.qi@xxxxxxxxxx>
Tested-by: ni.liqiang <niliqiang.io@xxxxxxxxx>
---
drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c | 15 +++++++++++++++
1 file changed, 15 insertions(+)

diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
index 0ffb1cf17e0b..ac854c46fdf3 100644
--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
@@ -3324,6 +3324,11 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
writel_relaxed(smmu->cmdq.q.llq.prod, smmu->base + ARM_SMMU_CMDQ_PROD);
writel_relaxed(smmu->cmdq.q.llq.cons, smmu->base + ARM_SMMU_CMDQ_CONS);

+ /*
+  * Ensure the write to SMMU_CMDQ_BASE completes before CMDQEN is set.
+  */
+ __iomb();
+
enables = CR0_CMDQEN;
ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
ARM_SMMU_CR0ACK);
@@ -3350,6 +3355,11 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
writel_relaxed(smmu->evtq.q.llq.prod, smmu->page1 + ARM_SMMU_EVTQ_PROD);
writel_relaxed(smmu->evtq.q.llq.cons, smmu->page1 + ARM_SMMU_EVTQ_CONS);

+ /*
+  * Ensure the write to SMMU_EVENTQ_BASE completes before EVENTQEN is set.
+  */
+ __iomb();
+
enables |= CR0_EVTQEN;
ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
ARM_SMMU_CR0ACK);
@@ -3367,6 +3377,11 @@ static int arm_smmu_device_reset(struct arm_smmu_device *smmu, bool bypass)
writel_relaxed(smmu->priq.q.llq.cons,
smmu->page1 + ARM_SMMU_PRIQ_CONS);

+ /*
+  * Ensure the write to SMMU_PRIQ_BASE completes before PRIQEN is set.
+  */
+ __iomb();
+
enables |= CR0_PRIQEN;
ret = arm_smmu_write_reg_sync(smmu, enables, ARM_SMMU_CR0,
ARM_SMMU_CR0ACK);
--
2.34.1