Re: DMA mapping on SCSI device?

From: Robert Hancock
Date: Tue Jan 29 2008 - 21:01:49 EST


Luben Tuikov wrote:
> --- On Mon, 1/28/08, Robert Hancock <hancockr@xxxxxxx> wrote:
>> The trick is that if an ATAPI device is connected, we (as
>> far as I'm aware) can't use ADMA mode, so we have to switch that
>> port into legacy mode.
>
> Can you double check this with the HW architect of the
> HW DMA engine of the ASIC?

Will do so. However, previous statements from NVIDIA fairly clearly indicate that this is the case.


>> This means it's only capable of 32-bit DMA.
>> However the other port on the controller may be connected to a hard drive and
>> therefore still capable of 64-bit DMA.
>
> If this is indeed the case as you've presented it here,
> it sounds like a HW shortcoming. I cannot see how the device
> type (or protocol) dictates how the DMA engine operates.
> They live in two different domains.

Well, there is an indirect link. The ADMA interface (which supports 64-bit DMA) cannot be used to issue ATAPI commands, so if an ATAPI device is connected we have to go to legacy mode, which supports only 32-bit DMA.
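
To make that concrete, something like this is effectively what the driver ends up having to do (a simplified sketch only, not the actual sata_nv code - nv_adma_set_dma_mask is a made-up name and the real logic has more state to juggle):

#include <linux/pci.h>
#include <linux/dma-mapping.h>

/*
 * Simplified sketch, not the actual sata_nv logic: an ATAPI device
 * forces the port into legacy mode, which can only generate 32-bit
 * DMA addresses.  Both ports map through the same PCI struct device,
 * so lowering the mask for one port lowers it for both.
 */
static int nv_adma_set_dma_mask(struct pci_dev *pdev, bool have_atapi)
{
	if (have_atapi)
		/* legacy mode: 32-bit addressing only */
		return pci_set_dma_mask(pdev, DMA_32BIT_MASK);

	/* ADMA mode: full 64-bit addressing */
	return pci_set_dma_mask(pdev, DMA_64BIT_MASK);
}

The mask lives on the one struct device both ports share, which is the root of the problem.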

I'm not sure why ADMA mode doesn't support ATAPI. The only reason I can think of is that there are issues because ATAPI commands can potentially be of unpredictable transfer size. The "real" ADMA spec that the NVIDIA implementation is loosely based on does have some special "ignore excess" controls that don't seem to be present in the NVIDIA version (or at least not as far as my knowledge of this hardware goes).

And yes, it is a rather unfortunate hardware shortcoming (presuming that it is entirely true).


>> The ideal solution would be to do mapping against a
>> different struct device for each port, so that we could maintain the proper
>> DMA mask for each of them at all times. However I'm not sure if
>> that's possible. The thought of using the SCSI struct device for DMA mapping was
>> brought up at one point... any thoughts on that?
>
> The reason not to do that is that the object a struct scsi_device
> represents has nothing to do with HW DMA engines.
>
> It looks like your current solution is correct and
> x86_64's blk_queue_bounce_limit needs work.
>
> Luben
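
Right, that matches my concern about using the SCSI device. Just to illustrate the per-port idea from above, here's roughly what I have in mind (purely hypothetical - libata doesn't currently give each port its own struct device for DMA purposes, and nv_port, nv_port_setup and nv_map_cmd are all made-up names):

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/*
 * Hypothetical sketch only - none of these names exist in libata.
 * Each port carries its own struct device with its own DMA mask,
 * and S/G mapping for a command is done against the port's device
 * rather than the controller's, so an ATAPI device on one port
 * doesn't force 32-bit DMA on the other.
 */
struct nv_port {
	struct device dev;	/* hypothetical per-port device */
	u64 dma_mask_storage;
};

static void nv_port_setup(struct nv_port *port, struct device *parent,
			  bool atapi)
{
	device_initialize(&port->dev);
	port->dev.parent = parent;
	/* an ATAPI device forces legacy mode, hence the 32-bit mask */
	port->dma_mask_storage = atapi ? DMA_32BIT_MASK : DMA_64BIT_MASK;
	port->dev.dma_mask = &port->dma_mask_storage;
	port->dev.coherent_dma_mask = port->dma_mask_storage;
}

static int nv_map_cmd(struct nv_port *port, struct scatterlist *sg,
		      int n_elem)
{
	/* honors this port's mask, independent of the other port */
	return dma_map_sg(&port->dev, sg, n_elem, DMA_TO_DEVICE);
}

Whether the DMA mapping layer would actually do the right thing with a device set up like this is the part I'm not sure about. In the meantime I'll stick with the current approach, and agreed that the x86_64 blk_queue_bounce_limit behavior is the other piece that needs fixing.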

