RDMA/rxe: Fix data copy for IB_SEND_INLINE
Author: Honggang LI <honggangli@163.com>
Thu, 16 May 2024 09:50:52 +0000 (17:50 +0800)
Committer: Leon Romanovsky <leon@kernel.org>
Thu, 30 May 2024 13:20:18 +0000 (16:20 +0300)
For RDMA Send and Write operations with IB_SEND_INLINE, the memory
buffers specified in the sge list are placed inline in the Send Request.

The data must be copied by the CPU from the virtual addresses that
correspond to the DMA addresses in the sge list. The old code passed
the result of ib_virt_dma_to_page() to memcpy(), which is a struct page
pointer rather than a pointer to the payload; ib_virt_dma_to_ptr() is
the correct conversion.

Cc: stable@kernel.org
Fixes: 8d7c7c0eeb74 ("RDMA: Add ib_virt_dma_to_page()")
Signed-off-by: Honggang LI <honggangli@163.com>
Link: https://lore.kernel.org/r/20240516095052.542767-1-honggangli@163.com
Reviewed-by: Zhu Yanjun <yanjun.zhu@linux.dev>
Reviewed-by: Li Zhijian <lizhijian@fujitsu.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
drivers/infiniband/sw/rxe/rxe_verbs.c

index c7d4d8ab5a0941b1fe5b3c3a0e806dbdc50aff16..de6238ee4379b766adc128656c12dba13d540c44 100644 (file)
@@ -812,7 +812,7 @@ static void copy_inline_data_to_wqe(struct rxe_send_wqe *wqe,
        int i;
 
        for (i = 0; i < ibwr->num_sge; i++, sge++) {
-               memcpy(p, ib_virt_dma_to_page(sge->addr), sge->length);
+               memcpy(p, ib_virt_dma_to_ptr(sge->addr), sge->length);
                p += sge->length;
        }
 }