Re: [PATCH v2] swiotlb: avoid potential left shift overflow

From: Dongli Zhang
Date: Fri Aug 19 2022 - 09:44:41 EST


I also encountered this when sending out another version of the 64-bit swiotlb series.

https://lore.kernel.org/all/20220609005553.30954-8-dongli.zhang@xxxxxxxxxx/

Unfortunately, I could not find an environment (e.g., powerpc) that can allocate more
than 4G of swiotlb to test with, at least until swiotlb supports 64-bit.

Although Xen supports 64-bit, the hypervisor side limits the maximum to less than 4G.

Dongli Zhang

On 8/19/22 1:45 AM, Chao Gao wrote:
> The second operand passed to slot_addr() is declared as int or unsigned int
> in all call sites. The left-shift to get the offset of a slot can overflow
> if swiotlb size is larger than 4G.
>
> Convert the macro to an inline function and declare the second argument as
> phys_addr_t to avoid the potential overflow.
>
> Fixes: 26a7e094783d ("swiotlb: refactor swiotlb_tbl_map_single")
> Signed-off-by: Chao Gao <chao.gao@xxxxxxxxx>
> ---
> kernel/dma/swiotlb.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index c5a9190b218f..41ea9fb3efe1 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -579,7 +579,10 @@ static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size
> }
> }
>
> -#define slot_addr(start, idx) ((start) + ((idx) << IO_TLB_SHIFT))
> +static inline phys_addr_t slot_addr(phys_addr_t start, phys_addr_t idx)
> +{
> + return start + (idx << IO_TLB_SHIFT);
> +}
>
> /*
> * Carefully handle integer overflow which can occur when boundary_mask == ~0UL.
>
> base-commit: 5c850d31880e00f063fa2a3746ba212c4bcc510f
>