From: Venkat Venkatsubra
Date: Mon, 7 Jan 2019 13:04:16 +0000 (-0800)
Subject: rds: RDS connection does not reconnect after CQ access violation error
X-Git-Tag: v4.1.12-124.31.3~336
X-Git-Url: https://www.infradead.org/git/?a=commitdiff_plain;h=0a1367a626848da88e5ca58bc145312cc3efc67a;p=users%2Fjedix%2Flinux-maple.git

rds: RDS connection does not reconnect after CQ access violation error

The sequence that leads to this state is as follows.

1) First we see a CQ error logged:

Sep 29 22:32:33 dm54cel14 kernel: [471472.784371] mlx4_core 0000:46:00.0: CQ access violation on CQN 000419 syndrome=0x2 vendor_error_syndrome=0x0

2) That is followed by the drop of the associated RDS connection:

Sep 29 22:32:33 dm54cel14 kernel: [471472.784403] RDS/IB: connection <192.168.54.43,192.168.54.1,0> dropped due to 'qp event'

3) We don't get the WR_FLUSH_ERRs for the posted receive buffers after
   that.

4) RDS is stuck in rds_ib_conn_shutdown while shutting down that
   connection:

crash64> bt 62577
PID: 62577  TASK: ffff88143f045400  CPU: 4  COMMAND: "kworker/u224:1"
 #0 [ffff8813663bbb58] __schedule at ffffffff816ab68b
 #1 [ffff8813663bbbb0] schedule at ffffffff816abca7
 #2 [ffff8813663bbbd0] schedule_timeout at ffffffff816aee71
 #3 [ffff8813663bbc80] rds_ib_conn_shutdown at ffffffffa041f7d1 [rds_rdma]
 #4 [ffff8813663bbd10] rds_conn_shutdown at ffffffffa03dc6e2 [rds]
 #5 [ffff8813663bbdb0] rds_shutdown_worker at ffffffffa03e2699 [rds]
 #6 [ffff8813663bbe00] process_one_work at ffffffff8109cda1
 #7 [ffff8813663bbe50] worker_thread at ffffffff8109d92b
 #8 [ffff8813663bbec0] kthread at ffffffff810a304b
 #9 [ffff8813663bbf50] ret_from_fork at ffffffff816b0752
crash64>

It was stuck forever here in rds_ib_conn_shutdown:

	/* quiesce tx and rx completion before tearing down */
	while (!wait_event_timeout(rds_ib_ring_empty_wait,
			rds_ib_ring_empty(&ic->i_recv_ring) &&
			(atomic_read(&ic->i_signaled_sends) == 0),
			msecs_to_jiffies(5000))) {

		/* Try to reap pending RX completions every 5 secs */
		if (!rds_ib_ring_empty(&ic->i_recv_ring)) {
			spin_lock_bh(&ic->i_rx_lock);
			rds_ib_rx(ic);
			spin_unlock_bh(&ic->i_rx_lock);
		}
	}

The recv ring was not empty:

	w_alloc_ptr = 560
	w_free_ptr  = 256

This is what Mellanox had to say:

  When a CQ moves to error (e.g. due to CQ overrun or CQ access
  violation), the firmware generates an async event to report the
  error, and the QPs that try to access this CQ are put into the error
  state but are not flushed, since CQEs must not be posted to a broken
  CQ. A QP that tries to access the broken CQ will also issue an async
  catastrophic-error event.

In summary, we cannot wait for any more WR_FLUSH_ERRs in that state.

Fix this by setting an RDS_IB_CQ_ERR flag on the connection in
rds_ib_cq_event_handler() and waking rds_ib_ring_empty_wait, so that
the quiesce loop in rds_ib_conn_path_shutdown() notices the CQ error
and bails out instead of waiting for completions that will never
arrive. The flag is cleared again before the CQs are destroyed and when
the connection completes a new handshake.

Orabug: 28733324

Reviewed-by: Rama Nichanamatlu
Signed-off-by: Venkat Venkatsubra
Signed-off-by: Brian Maly
---
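[ Illustration, not part of the patch: a minimal, self-contained
  userspace sketch of the wait-with-abort pattern the fix applies. All
  names here (cq_event_handler, posted_recvs, cq_err) are hypothetical
  stand-ins rather than RDS symbols, and a pthread condition variable
  plays the role of rds_ib_ring_empty_wait. Build with
  "cc -pthread sketch.c". ]

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t ring_empty_wait = PTHREAD_COND_INITIALIZER;
static int posted_recvs = 304;	/* pretend these stay un-reaped forever */
static bool cq_err;		/* stand-in for the RDS_IB_CQ_ERR flag  */

/* Analogue of rds_ib_cq_event_handler(): the CQ is broken, so no more
 * completions (and no WR_FLUSH_ERRs) will ever arrive. Record the
 * error AND wake the waiter, instead of letting it sleep. */
static void *cq_event_handler(void *arg)
{
	(void)arg;
	sleep(1);		/* simulate the async CQ error */
	pthread_mutex_lock(&lock);
	cq_err = true;
	pthread_cond_broadcast(&ring_empty_wait);
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, cq_event_handler, NULL);

	/* Analogue of the quiesce loop in rds_ib_conn_path_shutdown():
	 * wait until the ring drains OR the fatal flag is raised. */
	pthread_mutex_lock(&lock);
	while (posted_recvs > 0 && !cq_err)
		pthread_cond_wait(&ring_empty_wait, &lock);
	if (cq_err)
		printf("CQ error: giving up on %d unflushed recvs\n",
		       posted_recvs);
	pthread_mutex_unlock(&lock);

	pthread_join(t, NULL);
	return 0;
}

[ Both halves matter: the handler records the error and wakes the
  waiter, and the waiter checks the error flag as part of its wake-up
  condition. In the kernel fix the wake_up() merely avoids waiting out
  the remaining 5-second timeout, since wait_event_timeout() re-checks
  its condition on timeout; in this timeout-less sketch the broadcast
  is what prevents the waiter from sleeping forever. ]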
diff --git a/net/rds/ib.h b/net/rds/ib.h
index bd8eea05cb85..9545e6c61536 100644
--- a/net/rds/ib.h
+++ b/net/rds/ib.h
@@ -53,6 +53,7 @@
 #define NUM_RDS_RECV_SG (PAGE_ALIGN(RDS_MAX_FRAG_SIZE) / PAGE_SIZE)
 
 #define RDS_IB_CLEAN_CACHE 1
+#define RDS_IB_CQ_ERR 2
 
 #define RDS_IB_DEFAULT_FREG_PORT_NUM 1
 #define RDS_CM_RETRY_SEQ_EN BIT(7)
diff --git a/net/rds/ib_cm.c b/net/rds/ib_cm.c
index 53debab967c8..a03d4d6cac83 100644
--- a/net/rds/ib_cm.c
+++ b/net/rds/ib_cm.c
@@ -319,6 +319,7 @@ void rds_ib_cm_connect_complete(struct rds_connection *conn, struct rdma_cm_even
 	ic->i_sl = ic->i_cm_id->route.path_rec->sl;
 
 	atomic_set(&ic->i_cq_quiesce, 0);
+	ic->i_flags &= ~RDS_IB_CQ_ERR;
 
 	/*
 	 * Init rings and fill recv. this needs to wait until protocol negotiation
@@ -438,8 +439,15 @@ static void rds_ib_cm_fill_conn_param(struct rds_connection *conn,
 
 static void rds_ib_cq_event_handler(struct ib_event *event, void *data)
 {
-	rdsdebug("event %u (%s) data %p\n",
+	struct rds_connection *conn = data;
+	struct rds_ib_connection *ic = conn->c_transport_data;
+
+	pr_info("RDS/IB: event %u (%s) data %p\n",
 		 event->event, rds_ib_event_str(event->event), data);
+
+	ic->i_flags |= RDS_IB_CQ_ERR;
+	if (waitqueue_active(&rds_ib_ring_empty_wait))
+		wake_up(&rds_ib_ring_empty_wait);
 }
 
 static void rds_ib_cq_comp_handler_fastreg(struct ib_cq *cq, void *context)
@@ -1409,11 +1417,15 @@ void rds_ib_conn_path_shutdown(struct rds_conn_path *cp)
 
 	/* quiesce tx and rx completion before tearing down */
 	while (!wait_event_timeout(rds_ib_ring_empty_wait,
-			rds_ib_ring_empty(&ic->i_recv_ring) &&
-			(atomic_read(&ic->i_signaled_sends) == 0) &&
-			(atomic_read(&ic->i_fastreg_wrs) ==
-			 RDS_IB_DEFAULT_FREG_WR),
-			msecs_to_jiffies(5000))) {
+			(rds_ib_ring_empty(&ic->i_recv_ring) &&
+			 (atomic_read(&ic->i_signaled_sends) == 0) &&
+			 (atomic_read(&ic->i_fastreg_wrs) ==
+			  RDS_IB_DEFAULT_FREG_WR)) ||
+			(ic->i_flags & RDS_IB_CQ_ERR),
+			msecs_to_jiffies(5000))) {
+
+		if (ic->i_flags & RDS_IB_CQ_ERR)
+			break;
 
 		/* Try to reap pending RX completions every 5 secs */
 		if (!rds_ib_ring_empty(&ic->i_recv_ring)) {
@@ -1427,6 +1439,7 @@ void rds_ib_conn_path_shutdown(struct rds_conn_path *cp)
 	tasklet_kill(&ic->i_rtasklet);
 
 	atomic_set(&ic->i_cq_quiesce, 1);
+	ic->i_flags &= ~RDS_IB_CQ_ERR;
 
 	/* first destroy the ib state that generates callbacks */
 	if (ic->i_cm_id->qp)