Re: [PATCH v2 3/3] soc: qcom: rpmh: Conditionally check lockdep_assert_irqs_disabled()

From: Doug Anderson
Date: Wed Feb 03 2021 - 13:38:00 EST


Hi,

On Sun, Jan 24, 2021 at 10:21 PM Maulik Shah <mkshah@xxxxxxxxxxxxxx> wrote:
>
> @@ -136,6 +136,6 @@ void rpmh_rsc_invalidate(struct rsc_drv *drv);
> int rpmh_rsc_mode_solver_set(struct rsc_drv *drv, bool enable);
>
> void rpmh_tx_done(const struct tcs_request *msg, int r);
> -int rpmh_flush(struct rpmh_ctrlr *ctrlr);
> +int rpmh_flush(struct rpmh_ctrlr *ctrlr, bool from_last_cpu);

Given that you're touching this code, why not do the cleanup you
promised Stephen you'd do in April of 2020 [1]. Specifically rename
this function to something other than rpmh_flush().

[1] https://lore.kernel.org/r/11c37c89-aa1f-7297-9ecf-4d77a20deebd@xxxxxxxxxxxxxx/


> diff --git a/drivers/soc/qcom/rpmh-rsc.c b/drivers/soc/qcom/rpmh-rsc.c
> index 1c1f5b0..a67bcd6 100644
> --- a/drivers/soc/qcom/rpmh-rsc.c
> +++ b/drivers/soc/qcom/rpmh-rsc.c
> @@ -841,7 +841,8 @@ static int rpmh_rsc_cpu_pm_callback(struct notifier_block *nfb,
> * CPU.
> */
> if (spin_trylock(&drv->lock)) {
> - if (rpmh_rsc_ctrlr_is_busy(drv) || rpmh_flush(&drv->client))
> + if (rpmh_rsc_ctrlr_is_busy(drv) ||
> + rpmh_flush(&drv->client, true))

I'll channel the spirit of Bjorn and say that it's better to blow over
the 80 column limit here and avoid wrapping to a new line.
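i.e. just keep it as one line (same code as the hunk above, unwrapped):

```c
	if (rpmh_rsc_ctrlr_is_busy(drv) || rpmh_flush(&drv->client, true))
```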


> @@ -458,22 +458,33 @@ static int send_single(struct rpmh_ctrlr *ctrlr, enum rpmh_state state,
> * rpmh_flush() - Flushes the buffered sleep and wake sets to TCSes
> *
> * @ctrlr: Controller making request to flush cached data
> + * @from_last_cpu: Set if invoked from last cpu with irqs disabled
> *
> * Return:
> * * 0 - Success
> * * Error code - Otherwise
> */
> -int rpmh_flush(struct rpmh_ctrlr *ctrlr)
> +int rpmh_flush(struct rpmh_ctrlr *ctrlr, bool from_last_cpu)
> {
> struct cache_req *p;
> int ret = 0;
>
> - lockdep_assert_irqs_disabled();
> + /*
> + * rpmh_flush() can be called when we think we're running
> + * on the last CPU with irqs_disabled or when RPMH client
> + * explicitly requests to write sleep and wake data.
> + * (for e.g. when in solver mode clients can requests to flush)
> + *
> + * Conditionally check for irqs_disabled only when called
> + * from last cpu.
> + */
> +
> + if (from_last_cpu)
> + lockdep_assert_irqs_disabled();
>
> /*
> - * Currently rpmh_flush() is only called when we think we're running
> - * on the last processor. If the lock is busy it means another
> - * processor is up and it's better to abort than spin.
> + * If the lock is busy it means another transaction is on going,
> + * in such case it's better to abort than spin.
> */
> if (!spin_trylock(&ctrlr->cache_lock))
> return -EBUSY;

I think logically here you should only do the trylock if
"from_last_cpu". If "from_last_cpu" is false you should do a normal
(blocking) spin_lock().

Also: if "from_last_cpu" is false you need to use the "irq" or
"irqsave" variants of the spinlock, since irqs aren't guaranteed to be
disabled on that path.

Also: if "from_last_cpu" is false I think you should somehow confirm
that we're in solver mode. The only time it's legal to call this when
not "from_last_cpu" is when we've previously set solver mode, right?
That's the thing that makes everything OK and fulfills all the
requirements talked about in rpmh-rsc.c like in the comments for
rpmh_rsc_write_ctrl_data() where we say:

* The caller must ensure that no other RPMH actions are happening and the
* controller is idle when this function is called since it runs lockless.

I know I told you in patch #1 that we shouldn't have two copies of the
"in_solver_mode" state variable and, on the surface, it seems like the
check I'm requesting would be hard to do. I _think_ the right thing
to do is actually to combine your rpmh_write_sleep_and_wake() and
rpmh_mode_solver_set() functions. One way to do this would be to just
have rpmh_write_sleep_and_wake() implicitly enter solver mode. Then
you could change rpmh_mode_solver_set() to just
rpmh_mode_solver_exit() and have it only used to exit solver mode.

-Doug