Re: [PATCH v4] iommu: Optimise PCI SAC address trick

From: John Garry
Date: Tue Apr 18 2023 - 06:21:10 EST


On 18/04/2023 10:23, Vasant Hegde wrote:
> [ 172.017120] nvme 0000:41:00.0: Using 64-bit DMA addresses
> [ 172.022955] nvme 0000:41:00.0: Using 64-bit DMA addresses
> [ 172.028720] nvme 0000:41:00.0: Using 64-bit DMA addresses
> [ 172.031815] nvme 0000:41:00.0: Using 64-bit DMA addresses
> [ 172.031816] nvme 0000:41:00.0: Using 64-bit DMA addresses
> [ 172.038727] nvme 0000:41:00.0: Using 64-bit DMA addresses
> [ 172.038726] nvme 0000:41:00.0: Using 64-bit DMA addresses
> [ 172.038917] nvme 0000:41:00.0: Using 64-bit DMA addresses
> [ 172.038968] nvme 0000:41:00.0: Using 64-bit DMA addresses
> [ 172.038970] nvme 0000:41:00.0: Using 64-bit DMA addresses
> [ 172.039007] nvme 0000:41:00.0: Using 64-bit DMA addresses
> [ 172.039091] nvme 0000:41:00.0: Using 64-bit DMA addresses
> [ 172.039102] nvme 0000:41:00.0: Using 64-bit DMA addresses
>
> Otherwise patch worked fine for us.

Hi Vasant,

JFYI, since you are using NVMe, you could alternatively try something like what I did for some SCSI storage controller drivers, i.e. limit the request_queue max_sectors soft limit:

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index c2730b116dc6..0a99c9a629c9 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1814,6 +1814,8 @@ static void nvme_set_queue_limits(struct nvme_ctrl *ctrl,
 
 		max_segments = min_not_zero(max_segments, ctrl->max_segments);
 		blk_queue_max_hw_sectors(q, ctrl->max_hw_sectors);
+		q->limits.max_sectors = min(q->limits.max_hw_sectors,
+			(unsigned int)dma_opt_mapping_size(ctrl->dev));
 		blk_queue_max_segments(q, min_t(u32, max_segments, USHRT_MAX));
 	}
 	blk_queue_virt_boundary(q, NVME_CTRL_PAGE_SIZE - 1);
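
The SCSI-side change I mentioned caps the host-wide max_sectors in much the same way. Roughly like the following, as done in scsi_add_host_with_dma() in drivers/scsi/hosts.c (a sketch; check the current code there for the exact form):

	if (dma_dev->dma_mask) {
		/* Cap the soft limit to the optimal DMA mapping size */
		shost->max_sectors = min_t(unsigned int, shost->max_sectors,
				dma_opt_mapping_size(dma_dev) >> SECTOR_SHIFT);
	}

dma_opt_mapping_size() returns bytes, hence the >> SECTOR_SHIFT to convert to 512-byte sectors.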

dma_opt_mapping_size() returns the max IOVA caching size for the IOMMU DMA ops, so this means we would avoid allocating and freeing IOVAs
at such a high rate (which I assume was your problem).
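
For the IOMMU DMA ops, that size comes from the IOVA rcache upper bound. A simplified sketch of the relevant helpers, assuming the upstream names in drivers/iommu/dma-iommu.c and drivers/iommu/iova.c:

	static size_t iommu_dma_opt_mapping_size(void)
	{
		/* Largest IOVA size which the rcaches can serve */
		return iova_rcache_range();
	}

	unsigned long iova_rcache_range(void)
	{
		/* Largest cached order, e.g. 128K for a 4K PAGE_SIZE */
		return PAGE_SIZE << (IOVA_RANGE_CACHE_MAX_SIZE - 1);
	}

Mappings at or below this size can be allocated from and freed back to the per-CPU rcaches, rather than hitting the rbtree and its lock.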

Thanks,
John