Re: [PATCH v5 3/3] PCI: qcom: Add retry logic for link to be stable in L1ss

From: Krishna Chaitanya Chundru
Date: Tue Aug 23 2022 - 23:41:32 EST



On 8/5/2022 3:03 AM, Matthias Kaehlcke wrote:
On Wed, Aug 03, 2022 at 04:58:54PM +0530, Krishna chaitanya chundru wrote:
Some devices take time for the link to settle in L1ss, so add retry logic
before returning from the suspend op.

Signed-off-by: Krishna chaitanya chundru <quic_krichai@xxxxxxxxxxx>
---
drivers/pci/controller/dwc/pcie-qcom.c | 25 ++++++++++++++++++++-----
1 file changed, 20 insertions(+), 5 deletions(-)

diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
index f7dd5dc..f3201bd 100644
--- a/drivers/pci/controller/dwc/pcie-qcom.c
+++ b/drivers/pci/controller/dwc/pcie-qcom.c
@@ -1829,15 +1829,30 @@ static int __maybe_unused qcom_pcie_pm_suspend(struct device *dev)
 {
 	struct qcom_pcie *pcie = dev_get_drvdata(dev);
 	u32 val;
+	ktime_t timeout, start;

 	if (!pcie->cfg->supports_system_suspend)
 		return 0;

-	/* if the link is not in l1ss don't turn off clocks */
-	val = readl(pcie->parf + PCIE20_PARF_PM_STTS);
-	if (!(val & PCIE20_PARF_PM_STTS_LINKST_IN_L1SUB)) {
-		dev_warn(dev, "Link is not in L1ss\n");
-		return 0;
+	start = ktime_get();
+	/* Wait max 100 ms */
+	timeout = ktime_add_ms(start, 100);
In my tests 100 ms is ample margin for most NVMe models (it's often 0 ms and
generally < 10 ms), however with one model I saw delays of up to 150 ms, so
this should probably be 200 ms or so (it's a long time, but most of the
time the actual delay is significantly lower).
Ok, I will increase the time to 200 ms.

+	while (1) {
+		bool timedout = ktime_after(ktime_get(), timeout);
'timedout' looks very similar to the other local variable 'timeout' in this
function. Actually, why not just do without the new variable and put this
check after reading the register:

	if (ktime_after(ktime_get(), timeout)) {
		dev_warn(dev, "Link is not in L1ss\n");
		return 0;
	}
Ok, sure, will update in the next patch.
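The reworked loop would then look roughly like this (just an untested sketch
of the suggested restructuring, reusing the names from the patch; %lld is
used here since ktime_to_ms() returns s64):

	while (1) {
		/* if the link is not in l1ss don't turn off clocks */
		val = readl(pcie->parf + PCIE20_PARF_PM_STTS);
		if (val & PCIE20_PARF_PM_STTS_LINKST_IN_L1SUB) {
			dev_info(dev, "Link enters L1ss after %lld ms\n",
				 ktime_to_ms(ktime_get() - start));
			break;
		}

		if (ktime_after(ktime_get(), timeout)) {
			dev_warn(dev, "Link is not in L1ss\n");
			return 0;
		}

		usleep_range(1000, 1200);
	}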
+
+		/* if the link is not in l1ss don't turn off clocks */
+		val = readl(pcie->parf + PCIE20_PARF_PM_STTS);
+		if ((val & PCIE20_PARF_PM_STTS_LINKST_IN_L1SUB)) {
+			dev_info(dev, "Link enters L1ss after %d ms\n",
+				 ktime_to_ms(ktime_get() - start));

Probably this should be dev_dbg() to avoid cluttering the kernel log with a
message that isn't relevant most of the time.
Ok, sure, will update in the next patch.

+			break;
+		}
+
+		if (timedout) {
+			dev_warn(dev, "Link is not in L1ss\n");
+			return 0;
+		}
+		usleep_range(1000, 1200);
You could use fsleep() instead of specifying a range.

Based on my testing, I think a slightly higher delay like 5 ms wouldn't hurt.
That would result in less 'busy looping' for slower NVMes and would still
be reasonably fast for those that need 10 ms or so.
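For example, instead of the usleep_range() call, something like this
(assuming the 5 ms poll interval suggested above; fsleep() takes
microseconds):

		fsleep(5000);	/* ~5 ms between polls */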

Actually you could replace the entire loop with something like this:

	if (readl_poll_timeout(pcie->parf + PCIE20_PARF_PM_STTS, val,
			       val & PCIE20_PARF_PM_STTS_LINKST_IN_L1SUB,
			       5000, 200000)) {
		dev_warn(dev, "Link is not in L1ss\n");
		return 0;
	}

Ok, we will look into this option and will update the patch if needed.
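For reference, folded into the suspend op that would look roughly like the
untested sketch below (readl_poll_timeout() needs linux/iopoll.h; the 5000/200000
values are the 5 ms poll interval and 200 ms total timeout suggested above):

#include <linux/iopoll.h>

static int __maybe_unused qcom_pcie_pm_suspend(struct device *dev)
{
	struct qcom_pcie *pcie = dev_get_drvdata(dev);
	u32 val;

	if (!pcie->cfg->supports_system_suspend)
		return 0;

	/*
	 * Poll every 5 ms, for up to 200 ms, for the link to settle in
	 * L1ss; if it never does, keep the clocks on and bail out.
	 */
	if (readl_poll_timeout(pcie->parf + PCIE20_PARF_PM_STTS, val,
			       val & PCIE20_PARF_PM_STTS_LINKST_IN_L1SUB,
			       5000, 200000)) {
		dev_warn(dev, "Link is not in L1ss\n");
		return 0;
	}

	/* ... turn off clocks etc. as in the existing code ... */

	return 0;
}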