Re: [PATCH 1/4] cpufreq: qcom-nvmem: Enable virtual power domain devices

From: Stephan Gerhold
Date: Wed Oct 18 2023 - 03:13:35 EST


On Tue, Oct 17, 2023 at 11:50:21PM +0200, Ulf Hansson wrote:
> [...]
> > >
> > > *) The pm_runtime_resume_and_get() works for QCS404 as a fix. It also
> > > works fine when there is only one RPMPD that manages the performance
> > > scaling.
> > >
> >
> > Agreed.
> >
> > > **) In cases where we have multiple PM domains to scale performance
> > > for, using pm_runtime_resume_and_get() would work fine too. Possibly
> > > we want to use device_link_add() to set up suppliers, to avoid calling
> > > pm_runtime_resume_and_get() for each and every device.
> > >
> >
> > Hm. What would you use as "supplied" device? The CPU device I guess?
>
> The consumer would be the device that is used to probe the cpufreq
> driver, and the supplier(s) would be the virtual devices returned from
> genpd when attaching.
>
> >
> > I'm looking again at my old patch from 2020 where I implemented this
> > with device links in the OPP core. Seems like you suggested this back
> > then too :)
> >
> > https://lore.kernel.org/linux-pm/20200826093328.88268-1-stephan@xxxxxxxxxxx/
> >
> > However, for the special case of the CPU I think we don't gain any code
> > simplification from using device links. There will just be a single
> > resume of each virtual genpd device, as well as one put during remove().
> > Exactly the same applies when using device links: we need to set up the
> > device links once for each virtual genpd device and clean them up again
> > during remove().
> >
> > Or can you think of another advantage of using device links?
>
> No, not at this point.
>
> So, in this particular case it may not matter that much. But when the
> number of PM domains starts to vary between platforms, it could be a
> nice way to abstract some logic. I guess starting without using
> device links and seeing how it evolves could be a way forward too.
>

Sounds good :)
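
(For the record, here's roughly what the approach without device links
could look like in probe, i.e. attaching each PM domain and keeping its
virtual device runtime-resumed. Just an illustrative sketch with made-up
names and simplified error handling, not the actual qcom-nvmem code:)

	#include <linux/err.h>
	#include <linux/pm_domain.h>
	#include <linux/pm_runtime.h>

	static int example_attach_and_resume(struct device *cpu_dev,
					     const char * const *pd_names,
					     int num_pds,
					     struct device **virt_devs)
	{
		int i, ret;

		for (i = 0; i < num_pds; i++) {
			/* genpd hands back one virtual device per PM domain */
			virt_devs[i] = dev_pm_domain_attach_by_name(cpu_dev,
								    pd_names[i]);
			if (IS_ERR_OR_NULL(virt_devs[i]))
				return virt_devs[i] ?
					PTR_ERR(virt_devs[i]) : -ENODEV;

			/*
			 * Keep the virtual device resumed so genpd doesn't
			 * drop the performance state vote. Undone with
			 * pm_runtime_put() in remove().
			 */
			ret = pm_runtime_resume_and_get(virt_devs[i]);
			if (ret)
				return ret;
		}

		return 0;
	}

The device-link variant would replace the pm_runtime_resume_and_get()
call with something along the lines of:

	link = device_link_add(cpufreq_dev, virt_devs[i],
			       DL_FLAG_PM_RUNTIME | DL_FLAG_RPM_ACTIVE);
	if (!link)
		return -ENODEV;

but as discussed above, for the CPU case that doesn't really save us
anything.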

> >
> > > ***) Due to the above, we don't need a new mechanism to avoid
> > > "caching" performance states for genpd. At least for the time being.
> > >
> >
> > Right. Given *) and **) I'll prepare a v2 of $subject patch with the
> > remove() cleanup fixed and an improved commit description.
> >
> > I'll wait for a bit in case you have more thoughts about the device
> > links.
>
> One more thing though that crossed my mind. In the rpmpd case, is
> there anything we need to care about during system suspend/resume that
> isn't already taken care of correctly?
>

No, I don't think so. The RPM firmware makes no distinction between deep
cpuidle and system suspend. As long as we properly enter deep cpuidle as
part of s2idle, it will automatically release our votes for the
"active-only" (_AO) variant of the RPMPDs.

I'll send the adjusted v2 shortly for you to look at. :)

Thanks,
Stephan