Re: [PATCH] dmaengine/ste_dma40: support pm in dma40

From: Vinod Koul
Date: Thu Nov 17 2011 - 00:28:27 EST


On Thu, 2011-11-17 at 10:34 +0530, Narayanan G wrote:
>
> > > + */
> > > + if (base->rev == 1)
> > > + return;
> > > +
> > > +
> > > + spin_lock_irqsave(&d40c->base->usage_lock, flags);
> > > +
> > > + d40_power_off(d40c->base, d40c->phy_chan->num);
> > if this does what it says then it is wrong.
> > power_off should be done in your suspend callbacks.
> > same for on as well!!
>
> Actually, we need to switch off the clocks for the event groups
> that are not in use. Say, if only event group 2 is active, the other
> clocks can be switched off. The clocks for the unused event lines
> need not stay on until runtime suspend is called. Maybe I should
> rename this function to d40_power_off_evt_grp().
>
> > > - d40c->busy = true;
> > > + if (!d40c->busy) {
> > > + d40_usage_inc(d40c);
> > > + d40c->busy = true;
> > > + }
> > well, here is the problem!
> > You don't need to check busy here, just call pm_runtime_get(). Power
> > on will be taken care of, if a resume is required, in your resume
> > callback. No need for busy checks. You are not properly utilizing
> > the functionality provided by runtime PM.
>
> I have this usage_inc() function mainly for switching the clocks for
> the desired event groups on and off. The busy check here is to ensure
> that we skip the usage_inc() (clock management) in case the clock is
> already on.
> Is there a way to do this clock management at event-line granularity
> using the pm_runtime() framework?
Runtime PM handles power management at the device level. It calls your
suspend/resume callbacks when the device goes idle/active and tracks
the device's usage count for you.
Within the device you need to do your own management, which should be
tied to a channel being active. You should not conflate the two.

I am not sure if the clock framework can help you with this.

--
~Vinod

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/