[4.2.y-ckt stable] Patch "dmaengine: dw: disable BLOCK IRQs for non-cyclic xfer" has been added to the 4.2.y-ckt tree

Kamal Mostafa kamal at canonical.com
Mon Mar 7 22:34:09 UTC 2016


This is a note to let you know that I have just added a patch titled

    dmaengine: dw: disable BLOCK IRQs for non-cyclic xfer

to the linux-4.2.y-queue branch of the 4.2.y-ckt extended stable tree, which
can be found at:

    http://kernel.ubuntu.com/git/ubuntu/linux.git/log/?h=linux-4.2.y-queue

This patch is scheduled to be released in version 4.2.8-ckt5.

If you or anyone else feels it should not be added to this tree, please
reply to this email.

For more information about the 4.2.y-ckt tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable

Thanks.
-Kamal

---8<------------------------------------------------------------

From 3820d7e0632a5920bafcf45f4d6c692fb71f2109 Mon Sep 17 00:00:00 2001
From: Andy Shevchenko <andriy.shevchenko at linux.intel.com>
Date: Wed, 10 Feb 2016 15:59:42 +0200
Subject: dmaengine: dw: disable BLOCK IRQs for non-cyclic xfer

commit ee1cdcdae59563535485a5f56ee72c894ab7d7ad upstream.

Commit 2895b2cad6e7 ("dmaengine: dw: fix cyclic transfer callbacks")
re-enabled BLOCK interrupts to make cyclic transfers work. However, that
change is a regression for non-cyclic transfers: under stress testing the
interrupt counters grew enormously (roughly one interrupt per 4-5 bytes in
the UART loopback test).

Taking the above into consideration, enable BLOCK interrupts if and only if
the channel is programmed to perform a cyclic transfer (a simplified sketch
of this gating follows the patch).

Fixes: 2895b2cad6e7 ("dmaengine: dw: fix cyclic transfer callbacks")
Signed-off-by: Andy Shevchenko <andriy.shevchenko at linux.intel.com>
Acked-by: Mans Rullgard <mans at mansr.com>
Tested-by: Mans Rullgard <mans at mansr.com>
Acked-by: Viresh Kumar <viresh.kumar at linaro.org>
Signed-off-by: Vinod Koul <vinod.koul at intel.com>
Signed-off-by: Kamal Mostafa <kamal at canonical.com>
---
 drivers/dma/dw/core.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/drivers/dma/dw/core.c b/drivers/dma/dw/core.c
index a00d72f..f1c9e21 100644
--- a/drivers/dma/dw/core.c
+++ b/drivers/dma/dw/core.c
@@ -156,7 +156,6 @@ static void dwc_initialize(struct dw_dma_chan *dwc)

 	/* Enable interrupts */
 	channel_set_bit(dw, MASK.XFER, dwc->mask);
-	channel_set_bit(dw, MASK.BLOCK, dwc->mask);
 	channel_set_bit(dw, MASK.ERROR, dwc->mask);

 	dwc->initialized = true;
@@ -588,6 +587,9 @@ static void dwc_handle_cyclic(struct dw_dma *dw, struct dw_dma_chan *dwc,

 		spin_unlock_irqrestore(&dwc->lock, flags);
 	}
+
+	/* Re-enable interrupts */
+	channel_set_bit(dw, MASK.BLOCK, dwc->mask);
 }

 /* ------------------------------------------------------------------------- */
@@ -618,11 +620,8 @@ static void dw_dma_tasklet(unsigned long data)
 			dwc_scan_descriptors(dw, dwc);
 	}

-	/*
-	 * Re-enable interrupts.
-	 */
+	/* Re-enable interrupts */
 	channel_set_bit(dw, MASK.XFER, dw->all_chan_mask);
-	channel_set_bit(dw, MASK.BLOCK, dw->all_chan_mask);
 	channel_set_bit(dw, MASK.ERROR, dw->all_chan_mask);
 }

@@ -1256,6 +1255,7 @@ static void dwc_free_chan_resources(struct dma_chan *chan)
 int dw_dma_cyclic_start(struct dma_chan *chan)
 {
 	struct dw_dma_chan	*dwc = to_dw_dma_chan(chan);
+	struct dw_dma		*dw = to_dw_dma(chan->device);
 	unsigned long		flags;

 	if (!test_bit(DW_DMA_IS_CYCLIC, &dwc->flags)) {
@@ -1264,7 +1264,12 @@ int dw_dma_cyclic_start(struct dma_chan *chan)
 	}

 	spin_lock_irqsave(&dwc->lock, flags);
+
+	/* Enable interrupts to perform cyclic transfer */
+	channel_set_bit(dw, MASK.BLOCK, dwc->mask);
+
 	dwc_dostart(dwc, dwc->cdesc->desc[0]);
+
 	spin_unlock_irqrestore(&dwc->lock, flags);

 	return 0;
--
2.7.0
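
For readers without the driver source at hand, the shape of the fix is: BLOCK
interrupts are no longer unmasked in dwc_initialize() or re-armed
unconditionally in the tasklet; they are unmasked only when a cyclic transfer
is started (dw_dma_cyclic_start()) and re-armed by the cyclic handler
(dwc_handle_cyclic()). The following is a minimal, standalone C sketch of that
gating pattern; the names (struct chan, chan_start(), IRQ_BLOCK, ...) are
illustrative only and are not the driver's API.

/*
 * Illustrative sketch only -- not driver code.  Models how enabling of the
 * BLOCK interrupt is gated on whether a channel performs cyclic transfers.
 */
#include <stdbool.h>
#include <stdio.h>

enum irq_bit { IRQ_XFER = 1 << 0, IRQ_BLOCK = 1 << 1, IRQ_ERROR = 1 << 2 };

struct chan {
	unsigned int irq_mask;	/* enabled interrupt sources */
	bool cyclic;		/* channel programmed for cyclic transfer */
};

/* dwc_initialize() analogue: BLOCK is no longer enabled here */
static void chan_init(struct chan *c)
{
	c->irq_mask = IRQ_XFER | IRQ_ERROR;
}

/* dw_dma_cyclic_start() analogue: BLOCK only for cyclic channels */
static void chan_start(struct chan *c)
{
	if (c->cyclic)
		c->irq_mask |= IRQ_BLOCK;
}

/* dwc_handle_cyclic() analogue: re-arm BLOCK after servicing a period */
static void chan_handle_block_irq(struct chan *c)
{
	/* ... advance to the next period, run the callback ... */
	c->irq_mask |= IRQ_BLOCK;
}

int main(void)
{
	struct chan mem2mem = { .cyclic = false };
	struct chan audio = { .cyclic = true };

	chan_init(&mem2mem);
	chan_start(&mem2mem);

	chan_init(&audio);
	chan_start(&audio);
	chan_handle_block_irq(&audio);

	/* prints 0 for the non-cyclic channel, 1 for the cyclic one */
	printf("non-cyclic BLOCK enabled: %d\n", !!(mem2mem.irq_mask & IRQ_BLOCK));
	printf("cyclic     BLOCK enabled: %d\n", !!(audio.irq_mask & IRQ_BLOCK));
	return 0;
}

The point of the gating is that a non-cyclic transfer is completed and cleaned
up from the XFER interrupt alone (which stays enabled in the diff above), so an
additional interrupt per hardware block only adds overhead, which is what the
interrupt-counter blow-up described in the commit message shows.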
