RE: [RFC v2 PATCH 5/7] dmaengine: xilinx_dma: Freeup active list based on descriptor completion bit

From: Radhey Shyam Pandey
Date: Fri Jun 11 2021 - 14:58:49 EST


> -----Original Message-----
> From: Lars-Peter Clausen <lars@xxxxxxxxxx>
> Sent: Thursday, April 15, 2021 12:56 PM
> To: Radhey Shyam Pandey <radheys@xxxxxxxxxx>; vkoul@xxxxxxxxxx;
> robh+dt@xxxxxxxxxx; Michal Simek <michals@xxxxxxxxxx>
> Cc: dmaengine@xxxxxxxxxxxxxxx; devicetree@xxxxxxxxxxxxxxx; linux-arm-
> kernel@xxxxxxxxxxxxxxxxxxx; linux-kernel@xxxxxxxxxxxxxxx; git
> <git@xxxxxxxxxx>
> Subject: Re: [RFC v2 PATCH 5/7] dmaengine: xilinx_dma: Freeup active list
> based on descriptor completion bit
>
> On 4/9/21 7:56 PM, Radhey Shyam Pandey wrote:
> > AXIDMA IP in SG mode sets the completion bit to 1 when a transfer is
> > completed. Read this bit to move the descriptor from the active list to
> > the done list. This feature is needed when the interrupt delay timeout
> > and IRQThreshold are enabled, i.e. Dly_IrqEn is triggered without
> > completing the interrupt threshold.
> >
> > Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xxxxxxxxxx>
> > ---
> > - Check BD completion bit only for SG mode.
> > - Modify the logic to have early return path.
> > ---
> > drivers/dma/xilinx/xilinx_dma.c | 7 +++++++
> > 1 file changed, 7 insertions(+)
> >
> > diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
> > index 890bf46b36e5..f2305a73cb91 100644
> > --- a/drivers/dma/xilinx/xilinx_dma.c
> > +++ b/drivers/dma/xilinx/xilinx_dma.c
> > @@ -177,6 +177,7 @@
> > #define XILINX_DMA_CR_COALESCE_SHIFT 16
> > #define XILINX_DMA_BD_SOP BIT(27)
> > #define XILINX_DMA_BD_EOP BIT(26)
> > +#define XILINX_DMA_BD_COMP_MASK BIT(31)
> > #define XILINX_DMA_COALESCE_MAX 255
> > #define XILINX_DMA_NUM_DESCS 512
> > #define XILINX_DMA_NUM_APP_WORDS 5
> > @@ -1683,12 +1684,18 @@ static void xilinx_dma_issue_pending(struct dma_chan *dchan)
> > static void xilinx_dma_complete_descriptor(struct xilinx_dma_chan *chan)
> > {
> > struct xilinx_dma_tx_descriptor *desc, *next;
> > + struct xilinx_axidma_tx_segment *seg;
> >
> > /* This function was invoked with lock held */
> > if (list_empty(&chan->active_list))
> > return;
> >
> > list_for_each_entry_safe(desc, next, &chan->active_list, node) {
> > + /* TODO: remove hardcoding for axidma_tx_segment */
> > + seg = list_last_entry(&desc->segments,
> > + struct xilinx_axidma_tx_segment, node);
> > + if (!(seg->hw.status & XILINX_DMA_BD_COMP_MASK) && chan->has_sg)
> > + break;
> > if (chan->has_sg && chan->xdev->dma_config->dmatype != XDMA_TYPE_VDMA)
> > desc->residue = xilinx_dma_get_residue(chan, desc);
>
> Since not all descriptors will be completed in this function, the
> `chan->idle = true;` in xilinx_dma_irq_handler() needs to be gated on
> the active_list being empty.

Thanks for pointing it out. Agreed, I will fix it in the next version.