Re: [PATCH RFC V2 6/6] RDMA/rxe: Support PAGE_SIZE aligned MR

From: Bart Van Assche
Date: Fri Nov 03 2023 - 11:04:40 EST

On 11/3/23 02:55, Li Zhijian wrote:
-        return ib_sg_to_pages(ibmr, sgl, sg_nents, sg_offset, rxe_set_page);
+        for_each_sg(sgl, sg, sg_nents, i) {
+                u64 dma_addr = sg_dma_address(sg) + sg_offset;
+                unsigned int dma_len = sg_dma_len(sg) - sg_offset;
+                u64 end_dma_addr = dma_addr + dma_len;
+                u64 page_addr = dma_addr & PAGE_MASK;
+
+                if (sg_dma_len(sg) == 0) {
+                        rxe_dbg_mr(mr, "empty SGE\n");
+                        return -EINVAL;
+                }
+                do {
+                        int ret = rxe_store_page(mr, page_addr);
+                        if (ret)
+                                return ret;
+
+                        page_addr += PAGE_SIZE;
+                } while (page_addr < end_dma_addr);
+                sg_offset = 0;
+        }
+
+        return ib_sg_to_pages(ibmr, sgl, sg_nents, sg_offset_p, rxe_set_page);
 }

Is this change necessary? ib_sg_to_pages() already contains a loop that
splits SG entries larger than mr->page_size into chunks of size
mr->page_size.
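
For reference, the splitting loop in ib_sg_to_pages()
(drivers/infiniband/core/verbs.c) looks roughly like the sketch below.
This is a simplified paraphrase with the page-offset and gap handling
elided; set_page is the per-page driver callback, i.e. rxe_set_page()
in this case:

        for_each_sg(sgl, sg, sg_nents, i) {
                u64 dma_addr = sg_dma_address(sg) + sg_offset;
                unsigned int dma_len = sg_dma_len(sg) - sg_offset;
                u64 end_dma_addr = dma_addr + dma_len;
                u64 page_addr = dma_addr & ~((u64)mr->page_size - 1);

                do {
                        /* one callback per mr->page_size chunk of the SGE */
                        ret = set_page(mr, page_addr);
                        if (unlikely(ret < 0))
                                break;  /* error handling elided */
                        page_addr += mr->page_size;
                } while (page_addr < end_dma_addr);

                sg_offset = 0;
        }

With mr->page_size set to PAGE_SIZE this already invokes rxe_set_page()
once per PAGE_SIZE chunk, which is what the loop added above duplicates.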

Bart.