Re: Tearing down DMA transfer setup after DMA client has finished

From: Mason
Date: Fri Nov 25 2016 - 10:05:54 EST


On 25/11/2016 15:17, Russell King - ARM Linux wrote:
> On Fri, Nov 25, 2016 at 02:03:20PM +0000, Måns Rullgård wrote:
>> Russell King - ARM Linux <linux@xxxxxxxxxxxxxxx> writes:
>>
>>> On Fri, Nov 25, 2016 at 01:50:35PM +0000, Måns Rullgård wrote:
>>>> Russell King - ARM Linux <linux@xxxxxxxxxxxxxxx> writes:
>>>>> It would be unfair to augment the API and add the burden on everyone
>>>>> for the new API when 99.999% of the world doesn't require it.
>>>>
>>>> I don't think making this particular dma driver wait for the descriptor
>>>> callback to return before reusing a channel quite amounts to a horrid
>>>> hack. It certainly wouldn't burden anyone other than the poor drivers
>>>> for devices connected to it, all of which are specific to Sigma AFAIK.
>>>
>>> Except when you stop to think that delaying in a tasklet is exactly
>>> the same as randomly delaying in an interrupt handler - the tasklet
>>> runs on the return path back to the parent context of an interrupt
>>> handler. Even if you sleep in the tasklet, you're sleeping on behalf
>>> of the currently executing thread - if it's a RT thread, you effectively
>>> destroy the RT-ness of the thread. Let's hope no one cares about RT
>>> performance on that hardware...
>>
>> That's why I suggested to do this only if the needed delay is known to
>> be no more than a few bus cycles. The completion callback is currently
>> the only post-transfer interaction we have between the dma and device
>> drivers. To handle an arbitrarily long delay, some new interface will
>> be required.
>
> And now we're back at the point I made a few emails ago about undue
> burden which is just about quoted above...
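
(To make the situation Måns describes concrete: the only post-transfer
interaction a client currently has is the standard dmaengine completion
callback, which the DMA driver invokes from its tasklet. A minimal
client-side sketch -- names like my_done/my_xfer are made up -- looks
roughly like this:)

/* Sketch only: "my_done" and "my_xfer" are hypothetical names.
 * The callback runs in the DMA driver's tasklet, which is why
 * blocking there delays whatever context the tasklet preempted.
 */
#include <linux/completion.h>
#include <linux/dmaengine.h>

static void my_done(void *arg)
{
	complete(arg);		/* tasklet context */
}

static int my_xfer(struct dma_chan *chan, struct scatterlist *sg, int nents)
{
	DECLARE_COMPLETION_ONSTACK(done);
	struct dma_async_tx_descriptor *tx;

	tx = dmaengine_prep_slave_sg(chan, sg, nents, DMA_DEV_TO_MEM,
				     DMA_PREP_INTERRUPT);
	if (!tx)
		return -ENOMEM;

	tx->callback = my_done;
	tx->callback_param = &done;
	dmaengine_submit(tx);
	dma_async_issue_pending(chan);

	/* As far as the dmaengine API is concerned, the transfer is
	 * finished once the callback has run; the client gets no
	 * later notification.
	 */
	wait_for_completion(&done);
	return 0;
}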

I've had several talks with the HW devs, and I don't think they
anticipated the need to mux the 3 channels. In their minds,
customers would choose at most 3 devices to support, and
statically assign one channel to each.

In fact, on tango4, the supported devices are:
A) NAND Flash controllers 0 and 1 (NB: the upstream driver only uses controller 0)
B) IDE or SATA controllers 0 and 1
C) a few crypto HW blocks which do not work as expected (unused)

Customers typically use 1 channel for NAND, maybe 1 for SATA,
and 1 channel remains unused.
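
To put that in code terms, a client written for this static model would
just grab its dedicated channel once at probe time and hold on to it;
a rough sketch (names like "nand_priv" and "rx-tx" are made up):

#include <linux/dmaengine.h>
#include <linux/err.h>
#include <linux/platform_device.h>
#include <linux/slab.h>

struct nand_priv {
	struct dma_chan *chan;	/* dedicated channel, never shared */
};

static int nand_probe(struct platform_device *pdev)
{
	struct nand_priv *priv;

	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;

	/* One channel per device, requested once, released only in remove(). */
	priv->chan = dma_request_chan(&pdev->dev, "rx-tx");
	if (IS_ERR(priv->chan))
		return PTR_ERR(priv->chan);

	platform_set_drvdata(pdev, priv);
	return 0;
}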

I understand the desire to solve the general case in the
driver, but the actual use-cases are far simpler.

Regards.