Re: [PATCH] mmc: cqhci: Be more verbose in error irq handler

From: Adrian Hunter
Date: Mon Oct 23 2023 - 07:38:20 EST


On 20/10/23 11:53, Kornel Dulęba wrote:
> On Fri, Oct 20, 2023 at 9:41 AM Adrian Hunter <adrian.hunter@xxxxxxxxx> wrote:
>>
>> On 16/10/23 12:56, Kornel Dulęba wrote:
>>> There are several reasons for the controller to generate an error interrupt.
>>> They include controller<->card timeouts and CRC mismatch errors.
>>> Right now we only get one line in the logs stating that CQE recovery was
>>> triggered, but with no information about what caused it.
>>> To figure out what happened, be more verbose and dump the registers from
>>> the error irq handler logic.
>>> This matches the behaviour of the software timeout logic, see
>>> cqhci_timeout.
>>>
>>> Signed-off-by: Kornel Dulęba <korneld@xxxxxxxxxxxx>
>>> ---
>>> drivers/mmc/host/cqhci-core.c | 5 +++--
>>> 1 file changed, 3 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/drivers/mmc/host/cqhci-core.c b/drivers/mmc/host/cqhci-core.c
>>> index b3d7d6d8d654..33abb4bd53b5 100644
>>> --- a/drivers/mmc/host/cqhci-core.c
>>> +++ b/drivers/mmc/host/cqhci-core.c
>>> @@ -700,8 +700,9 @@ static void cqhci_error_irq(struct mmc_host *mmc, u32 status, int cmd_error,
>>>
>>> terri = cqhci_readl(cq_host, CQHCI_TERRI);
>>>
>>> - pr_debug("%s: cqhci: error IRQ status: 0x%08x cmd error %d data error %d TERRI: 0x%08x\n",
>>> - mmc_hostname(mmc), status, cmd_error, data_error, terri);
>>> + pr_warn("%s: cqhci: error IRQ status: 0x%08x cmd error %d data error %d\n",
>>> + mmc_hostname(mmc), status, cmd_error, data_error);
>>> + cqhci_dumpregs(cq_host);
>>
>> For debugging, wouldn't dynamic debug seem more appropriate?
>
> Dynamic debug is an option, but my personal preference would be to
> just log more info in the error handler.

Interrupt handlers can get called very rapidly, so some kind of rate
limiting should be used if the message is unconditional. Also, you need
to provide actual reasons for your preference.
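
For example, the unconditional message and dump could go through the
kernel's ratelimit helpers. A sketch against the quoted hunk (untested,
the ratelimit state name is made up here):

	/* Rate-limit both the message and the register dump so an
	 * interrupt storm cannot flood the log. DEFINE_RATELIMIT_STATE
	 * and __ratelimit() come from <linux/ratelimit.h>.
	 */
	static DEFINE_RATELIMIT_STATE(cqhci_err_rs, DEFAULT_RATELIMIT_INTERVAL,
				      DEFAULT_RATELIMIT_BURST);

	pr_warn_ratelimited("%s: cqhci: error IRQ status: 0x%08x cmd error %d data error %d TERRI: 0x%08x\n",
			    mmc_hostname(mmc), status, cmd_error, data_error, terri);
	if (__ratelimit(&cqhci_err_rs))
		cqhci_dumpregs(cq_host);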

For dynamic debug of the register dump, something like below is
possible.

#define cqhci_dynamic_dumpregs(cqhost) \
	_dynamic_func_call_no_desc("cqhci_dynamic_dumpregs", cqhci_dumpregs, cqhost)

> To give you some background.
> We're seeing some "running CQE recovery" lines in the logs, followed
> by a dm_verity mismatch error.
> The reports come from the field, with no feasible way to reproduce the
> issue locally.

If it is a software error, some kind of error injection may well
reproduce it. Also if it is a hardware error that only happens
during recovery, error injection could increase the likelihood of
reproducing it.
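
For instance, a kernel built with CONFIG_FAIL_MMC_REQUEST (plus
CONFIG_FAULT_INJECTION_DEBUG_FS) exposes per-host fault-injection knobs
under debugfs that make the core fake request errors. A sketch, assuming
the host is mmc0 and debugfs is mounted at the usual place:

	# Inject an error into roughly 10% of requests, forever
	echo 10 > /sys/kernel/debug/mmc0/fail_mmc_request/probability
	echo -1 > /sys/kernel/debug/mmc0/fail_mmc_request/times

That exercises the recovery path repeatedly, which may be enough to
reproduce an error that only manifests during recovery.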

>
> I'd argue that logging only the info that CQE recovery was executed is
> not particularly helpful for someone looking into those logs.

As the comment says, that message is there because recovery reduces
performance, it is not to aid debugging per se.

> Ideally we would have more data about the state the controller was in
> when the error happened, or at least what caused the recovery to be
> triggered.
> The question here is how verbose should we be in this error scenario.
> Looking at other error scenarios, in the case of a software timeout
> we're dumping the controller registers. (cqhci_timeout)

Timeout means something is broken - either the driver, the cq engine
or the card. On the other hand, an error interrupt is most likely a
CRC error which is not unexpected occasionally, due to thermal drift
or perhaps interference.

> Hence I thought it'd be appropriate to match that and do the same
> in CQE recovery logic.

It needs to be consistent. There are other pr_debugs, such as:

pr_debug("%s: cqhci: Failed to clear tasks\n",
pr_debug("%s: cqhci: Failed to halt\n", mmc_hostname(mmc));
pr_debug("%s: cqhci: disable / re-enable\n", mmc_hostname(mmc));

which should perhaps be treated the same.

And there are no messages for errors from the commands in
mmc_cqe_recovery().

>
>>
>>>
>>> /* Forget about errors when recovery has already been triggered */
>>> if (cq_host->recovery_halt)
>>