Re: sata_nv and 2.6.24 (was Re: fixed a bug of adma in rhel4u5 with HDS7250SASUN500G.)

From: Robert Hancock
Date: Wed Jan 23 2008 - 20:54:38 EST


Jeff Garzik wrote:
> Robert Hancock wrote:
>> Jeff Garzik wrote:
>>> Ping... sata_nv status is still a bit open for 2.6.24, and I would like to move us forward a bit.
>>>
>>> * Kuan's patch... it has been confirmed (and is needed), correct? Can someone work up a good patch for 2.6.24? The only one I ever received was badly word-wrapped, and at the time, Robert seemed uncertain of it, so I waited.

>> I can get you one later today hopefully.

A question came up on this patch, whether it will cause problems with ATAPI mode - waiting for a response from the NVIDIA guys.



>>> * ADMA ATAPI 4GB issues... playing tricks with the ordering of allocations and DMA masks is just way too fragile. We just cannot guarantee that all allocators work that way. The obvious solution to me seems to be hardcoding the consistent DMA mask to 32-bit, but using 64-bit for the regular DMA mask if-and-only-if ADMA is enabled.
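(For illustration, a minimal sketch of that split using the PCI DMA API of this era; the function name and the adma_enabled flag are placeholders, not anything actually in sata_nv:)

    #include <linux/pci.h>
    #include <linux/dma-mapping.h>

    /* Sketch only: cap consistent (control-structure) allocations at 32-bit,
     * and raise the streaming/data mask to 64-bit only when ADMA is in use. */
    static int nv_set_dma_masks_sketch(struct pci_dev *pdev, int adma_enabled)
    {
            int rc;

            /* ADMA control structures must always stay below 4GB */
            rc = pci_set_consistent_dma_mask(pdev, DMA_32BIT_MASK);
            if (rc)
                    return rc;

            /* data transfers get 64-bit addressing only under ADMA */
            rc = pci_set_dma_mask(pdev,
                                  adma_enabled ? DMA_64BIT_MASK : DMA_32BIT_MASK);
            if (rc)
                    rc = pci_set_dma_mask(pdev, DMA_32BIT_MASK);
            return rc;
    }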

>> That's not enough to fix the problem, since there are issues with actual transfer data being allocated above 4GB as well, not just the consistent allocations (it appears that setting blk_queue_bounce_limit to 32-bit doesn't prevent this on x86_64). Either we play some funky games with changing the DMA mask of the entire device to 32-bit if either port is in ATAPI mode (which blew up when I tried it), or we add the ability to set the DMA mask independently on each port (say, by setting the mask on the SCSI device and using that for DMA mapping instead), which requires core changes.
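(The per-queue knob being referred to is blk_queue_bounce_limit(), normally applied from the driver's slave_configure hook; a rough sketch of the 32-bit clamp, which, as noted above, does not reliably keep already-highmem pages below 4GB on x86_64:)

    #include <linux/blkdev.h>
    #include <linux/dma-mapping.h>
    #include <scsi/scsi_device.h>

    /* Rough sketch: ask the block layer to bounce any page of this port's
     * request queue that sits above the 4GB boundary. */
    static void nv_clamp_bounce_sketch(struct scsi_device *sdev)
    {
            blk_queue_bounce_limit(sdev->request_queue, DMA_32BIT_MASK);
    }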

> It's all funky games that no other driver is doing... There is one guaranteed-to-work scenario -- set all masks and bounce limits etc. to 32-bit. There is also one highly-likely-to-work scenario, disabling ADMA by default.

Sure, if you don't mind a potentially significant performance regression. All the DMA mask problems are due to the fact that the mask settings for both ports are ganged together on the PCI device. If we could set the DMA masks on the SCSI device or something else that is port-specific, and do the command DMA mapping against that device, then most of the weirdness goes away.
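(Roughly, the idea is that each port's command mapping would go through its own struct device rather than the shared pci_dev; the struct and helper below are hypothetical, and letting the SCSI/block core honour such a per-port mask is exactly the missing core change:)

    #include <linux/device.h>
    #include <linux/dma-mapping.h>

    /* Hypothetical per-port private data: today the mapping device could
     * only be &pdev->dev, which is shared by both ports. */
    struct nv_port_priv_sketch {
            struct device *dma_dev;   /* device whose DMA mask governs this port */
    };

    /* Map a command buffer against the per-port device so that its mask,
     * rather than the shared PCI device's, applies. */
    static dma_addr_t nv_map_cmd_sketch(struct nv_port_priv_sketch *pp,
                                        void *buf, size_t len)
    {
            return dma_map_single(pp->dma_dev, buf, len, DMA_TO_DEVICE);
    }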

It does seem like we're starting to get a bit of NVIDIA interest in looking into ADMA issues, which is definitely welcome.



>>> * it sure seems like there are other open sata_nv ADMA issues -- can we hard-confirm or deny this? bugzilla wasn't very helpful for me. It doesn't seem like we can disable ADMA (to solve those issues) and get enough test time in (which is what I said a week (or more?) ago too...)

>> The NCQ/non-NCQ command switching issue is still hitting some people (last I heard, Kuan was looking into this); there's also a hotplug issue that Tejun reported.

> The former implies we need to disable swncq for 2.6.24, if it's not stable yet.

Huh? Nothing to do with SWNCQ, which last I checked was still off by default.
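(For context, an off-by-default option like swncq is typically wired up along these lines; the variable name below is illustrative, not a quote of sata_nv.c:)

    #include <linux/module.h>

    /* Illustrative only: an option the user must enable explicitly,
     * e.g. "modprobe sata_nv swncq=1", matching "off by default" above. */
    static int swncq_enabled;  /* 0 unless set on the module command line */
    module_param_named(swncq, swncq_enabled, bool, 0444);
    MODULE_PARM_DESC(swncq, "Enable use of SWNCQ (off by default)");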