From: Qing Huang
Date: Thu, 18 May 2017 23:33:53 +0000 (-0700)
Subject: RDMA/core: not to set page dirty bit if it's already set.
X-Git-Tag: v4.1.12-105.0.20170705_2000~7
X-Git-Url: https://www.infradead.org/git/?a=commitdiff_plain;h=cceed4280c0ba1fdc4ae6fd8f315443a6593cdce;p=users%2Fjedix%2Flinux-maple.git

RDMA/core: not to set page dirty bit if it's already set.

This change optimizes kernel memory deregistration. __ib_umem_release()
used to call set_page_dirty_lock() on every writable page in its memory
region; the purpose is to keep data synced between the CPU and the DMA
device if swapping happens after the memory is deregistered. Now we skip
setting the page dirty bit when the kernel has already set it before
__ib_umem_release() is called. This cut memory deregistration time by
half or more in an application simulation test program.

Orabug: 24313031

Signed-off-by: Qing Huang
Signed-off-by: Doug Ledford
(cherry picked from upstream commit 53376fedb9da54c0d3b0bd3a6edcbeb681692909)
Reviewed-by: Yuval Shaia
---

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 522b2e99a021..a5000539aa83 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -67,7 +67,7 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d
 	for_each_sg(umem->sg_head.sgl, sg, umem->npages, i) {
 
 		page = sg_page(sg);
-		if (umem->writable && dirty)
+		if (!PageDirty(page) && umem->writable && dirty)
 			set_page_dirty_lock(page);
 		put_page(page);
 	}
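
For context, a minimal annotated sketch of the release loop as it reads with
this patch applied. Only the loop from the hunk above is shown; the rest of
__ib_umem_release() and its local declarations are elided, so this is
illustrative rather than a complete function:

	for_each_sg(umem->sg_head.sgl, sg, umem->npages, i) {

		page = sg_page(sg);
		/*
		 * set_page_dirty_lock() takes the page lock before marking
		 * the page dirty, which is comparatively expensive. When the
		 * kernel has already dirtied the page, the PageDirty() test
		 * lets us skip that call entirely, which is where the
		 * deregistration speedup comes from.
		 */
		if (!PageDirty(page) && umem->writable && dirty)
			set_page_dirty_lock(page);
		/* Drop the reference taken when the umem pages were pinned. */
		put_page(page);
	}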