From: Ajaykumar Hotchandani
Date: Wed, 27 Feb 2019 22:53:57 +0000 (-0800)
Subject: xve: arm ud tx cq to generate completion interrupts
X-Git-Tag: v4.1.12-124.31.3~133
X-Git-Url: https://www.infradead.org/git/?a=commitdiff_plain;h=b46ec5bb492dedcd6798ab9c9fd43f2fd3373b08;p=users%2Fjedix%2Flinux-maple.git

xve: arm ud tx cq to generate completion interrupts

IPoIB polls the UD send CQ only on every 16th post_send() request to
reduce the interrupt count, and it does not arm the UD send CQ (the
value 16 is controlled by MAX_SEND_CQE).

XVE follows the IPoIB approach for handling the UD send CQ; however, it
missed polling the send CQ after a certain number of iterations. This
makes the freeing of work-request resources unreliable, since
completion arrival is not controlled. This caused a problem for live
migration: the initial UDP and ICMP skbs, which use UD work requests,
were not getting freed, and the xenwatch process got stuck waiting for
those skbs to be freed.

This patch does the following:
- Arm the send CQ at initialization. This generates an interrupt for
  the initial UD send requests.
- Once polling of the send CQ is complete, arm it again so that an
  interrupt is generated when the next CQE arrives.

I'm going back to an interrupt mechanism because the UD workload for
xve is extremely limited, and I don't expect to generate an interrupt
flood here.
And I don't want to miss out on freeing an skb. For example, if only 10
post_send() calls are attempted on the UD QP, and after that we try to
live migrate that VM, we may miss the completions if our logic is to
poll the CQ only on every 16th post_send() iteration.

Orabug: 28267050

Signed-off-by: Ajaykumar Hotchandani
Reviewed-by: Chien Yen
Signed-off-by: Brian Maly
---
diff --git a/drivers/infiniband/ulp/xsigo/xve/xve_ib.c b/drivers/infiniband/ulp/xsigo/xve/xve_ib.c
index d5d4b2a86b47..b8f0bb55cbce 100644
--- a/drivers/infiniband/ulp/xsigo/xve/xve_ib.c
+++ b/drivers/infiniband/ulp/xsigo/xve/xve_ib.c
@@ -558,6 +558,10 @@ int poll_tx(struct xve_dev_priv *priv)
 				__func__, n);
 	}
 
+	/* Since we are not going to be polled again, arm the CQ */
+	if (n != MAX_SEND_CQE)
+		ib_req_notify_cq(priv->send_cq, IB_CQ_NEXT_COMP);
+
 	return n == MAX_SEND_CQE;
 }
 
diff --git a/drivers/infiniband/ulp/xsigo/xve/xve_verbs.c b/drivers/infiniband/ulp/xsigo/xve/xve_verbs.c
index 2afcf7362990..d1e139286a06 100644
--- a/drivers/infiniband/ulp/xsigo/xve/xve_verbs.c
+++ b/drivers/infiniband/ulp/xsigo/xve/xve_verbs.c
@@ -208,6 +208,8 @@ int xve_transport_dev_init(struct net_device *dev, struct ib_device *ca)
 	if (ib_req_notify_cq(priv->recv_cq, IB_CQ_NEXT_COMP))
 		goto out_free_send_cq;
 
+	if (ib_req_notify_cq(priv->send_cq, IB_CQ_NEXT_COMP))
+		goto out_free_send_cq;
 	coal = kzalloc(sizeof(*coal), GFP_KERNEL);
 	if (coal) {