RE: [PATCH v4 3/5] swiotlb: Honour dma_alloc_coherent() alignment in swiotlb_alloc()

From: Michael Kelley
Date: Wed Feb 21 2024 - 18:36:50 EST


From: Will Deacon <will@xxxxxxxxxx> Sent: Wednesday, February 21, 2024 3:35 AM
>
> core-api/dma-api-howto.rst states the following properties of
> dma_alloc_coherent():
>
> | The CPU virtual address and the DMA address are both guaranteed to
> | be aligned to the smallest PAGE_SIZE order which is greater than or
> | equal to the requested size.
>
> However, swiotlb_alloc() passes zero for the 'alloc_align_mask'
> parameter of swiotlb_find_slots() and so this property is not upheld.
> Instead, allocations larger than a page are aligned only to PAGE_SIZE.
>
> Calculate the mask corresponding to the page order suitable for holding
> the allocation and pass that to swiotlb_find_slots().
>
> Cc: Christoph Hellwig <hch@xxxxxx>
> Cc: Marek Szyprowski <m.szyprowski@xxxxxxxxxxx>
> Cc: Robin Murphy <robin.murphy@xxxxxxx>
> Cc: Dexuan Cui <decui@xxxxxxxxxxxxx>
> Fixes: e81e99bacc9f ("swiotlb: Support aligned swiotlb buffers")
> Reviewed-by: Petr Tesarik <petr.tesarik1@xxxxxxxxxxxxxxxxxxx>
> Signed-off-by: Will Deacon <will@xxxxxxxxxx>
> ---
> kernel/dma/swiotlb.c | 4 +++-
> 1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index ab7fbb40bc55..c20324fba814 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -1633,12 +1633,14 @@ struct page *swiotlb_alloc(struct device *dev, size_t size)
>  	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
>  	struct io_tlb_pool *pool;
>  	phys_addr_t tlb_addr;
> +	unsigned int align;
>  	int index;
>
>  	if (!mem)
>  		return NULL;
>
> -	index = swiotlb_find_slots(dev, 0, size, 0, &pool);
> +	align = (1 << (get_order(size) + PAGE_SHIFT)) - 1;
> +	index = swiotlb_find_slots(dev, 0, size, align, &pool);
>  	if (index == -1)
>  		return NULL;
>
> --
> 2.44.0.rc0.258.g7320e95886-goog
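
For what it's worth, here is a quick user-space sketch of the new mask
computation (my own illustration, not part of the patch: get_order() is
open-coded since it is a kernel helper, and PAGE_SHIFT is assumed to be
12, i.e. 4 KiB pages):

#include <stdio.h>

#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)

/*
 * Open-coded stand-in for the kernel's get_order(): the smallest order
 * such that (PAGE_SIZE << order) >= size.
 */
static int get_order(unsigned long size)
{
	int order = 0;

	while ((PAGE_SIZE << order) < size)
		order++;
	return order;
}

int main(void)
{
	unsigned long sizes[] = { 4096, 8192, 12288, 65536 };

	for (int i = 0; i < 4; i++) {
		unsigned long size = sizes[i];
		unsigned long align =
			(1UL << (get_order(size) + PAGE_SHIFT)) - 1;

		/* e.g. size 12288 -> order 2 -> align 0x3fff (16 KiB) */
		printf("size %6lu -> align mask 0x%lx\n", size, align);
	}
	return 0;
}

With the old call a 12 KiB allocation was only guaranteed PAGE_SIZE
(4 KiB) alignment; with the new mask of 0x3fff it lands on a 16 KiB
boundary, i.e. the smallest page order that holds the size, which is
what dma-api-howto.rst promises.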

Reviewed-by: Michael Kelley <mhklinux@xxxxxxxxxxx>