Mukesh Kacker [Wed, 8 Oct 2014 20:11:14 +0000 (13:11 -0700)]
RDS: move more queueing for loopback connections to separate queue
All connect/reconnect/disconnect/reject processing for the (few) local
loopback connections in RDS should use a separate, dedicated workqueue
to reduce latency, so that it does not get stuck behind the processing
of a large number of remote connections. However, a few instances of
such processing do not. This fix changes those instances to use the
workqueue dedicated to local connections.
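A minimal sketch of the pattern, assuming the dedicated workqueue is
named rds_local_wq (a hypothetical name; c_loopback and c_conn_w follow
the existing rds_connection fields):

    /* route loopback connection work to the dedicated queue */
    struct workqueue_struct *wq = conn->c_loopback ? rds_local_wq : rds_wq;

    queue_delayed_work(wq, &conn->c_conn_w, 0);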
This patch adds the following feature to ib_ipoib, rds_rdma, ib_core and
mlx4_core.
It adds a module parameter "module_unload_allowed". If the parameter is 1,
the module can be unloaded (the same behavior as before); if it is 0, the
module is not allowed to be unloaded. The parameter cannot be changed
while the module is loaded.
default values:
ib_ipoib: 1 for YES
rds_rdma: 0 for NO
ib_core: 1 for YES
mlx4_core: 0 for NO
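A minimal sketch of the likely mechanism, assuming the parameter pins the
module by taking an extra reference at init time (the init function name
is illustrative):

    static bool module_unload_allowed = true;
    module_param(module_unload_allowed, bool, 0444); /* read-only at runtime */
    MODULE_PARM_DESC(module_unload_allowed, "Allow this module to be unloaded");

    static int __init example_init(void)
    {
            if (!module_unload_allowed)
                    __module_get(THIS_MODULE); /* refcount never drops to 0 */
            return 0;
    }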
Signed-off-by: Wengang Wang <wen.gang.wang@oracle.com>
Acked-by: Joe Jin <joe.jin@oracle.com>
Acked-by: Todd Vierling <todd.vierling@oracle.com>
Acked-by: Yuval Shaia <yuval.shaia@oracle.com>
Signed-off-by: Guangyu Sun <guangyu.sun@oracle.com>
(cherry picked from commit cf1a00039e6fea116e9ea7c82f55ee3ee5319cec)
rds: fix NULL pointer dereference panic during rds module unload
This reported issue happens during an unload of the rds module when an
rds reconnect timeout worker is scheduled to execute in the future and
the module is unloaded before it runs. The rds reconnect timeout worker
was introduced by commit 8991a87c6c3fc8b17383a140bd6f15a958e31298 (RDS:
SA query optimization). The fix is to flush/cancel any reconnect timeout
workers while destroying rds connections, which is done during module
unload.
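A sketch of the fix, assuming the reconnect timeout worker is a
delayed_work hanging off the connection (the field name c_reconn_w is an
assumption):

    static void rds_conn_destroy(struct rds_connection *conn)
    {
            /* make sure no reconnect timeout worker runs after teardown */
            cancel_delayed_work_sync(&conn->c_reconn_w);
            /* ... existing connection teardown ... */
    }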
Mukesh Kacker [Wed, 13 Aug 2014 20:02:06 +0000 (13:02 -0700)]
RDS: active bonding: disable failover across HCAs (failover groups)
Disable experimental code in RDS active bonding which performs
failovers across "failover groups" (HCAs). It causes
instabilities for some applications.
RDS/IB: active bonding - failover down interfaces on reboot.
RDS active bonding detects port-down transitions on active ports via
events (and performs failover when notified), but it does not detect
ports which are already down at boot time.
The changes track the hardware port, link layer, and netdev layer
up/down status separately; the aggregate port status is deduced to be
UP when ALL layers are UP, and DOWN when any one layer goes down.
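A sketch of the aggregation logic (the flag names are hypothetical):

    #define RDS_PORT_LAYER_HW     0x1
    #define RDS_PORT_LAYER_LINK   0x2
    #define RDS_PORT_LAYER_NETDEV 0x4
    #define RDS_PORT_LAYERS_ALL   0x7

    /* UP only when every layer is up; DOWN as soon as any layer drops */
    static int rds_port_is_up(unsigned int layers_up)
    {
            return (layers_up & RDS_PORT_LAYERS_ALL) == RDS_PORT_LAYERS_ALL;
    }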
A delayed task is scheduled at module load time to run after a maximum
delay, OR on a sysctl trigger from an init script after all active
devices have been brought up. The ports found to be DOWN are failed
over.
Mukesh Kacker [Sat, 21 Jun 2014 01:42:23 +0000 (18:42 -0700)]
RDS/IB: Remove dangling rcu_read_unlock() and other cleanups
Delete a dangling rcu_read_unlock() which was left behind when the
matching rcu_read_lock() and the enclosed code were removed in commit
538f5d0dfa704f4dcb4afa80a1d01b1317b9cd65
Shamir Rabinovitch [Sat, 28 Jun 2014 23:25:16 +0000 (16:25 -0700)]
rds: new extension header: rdma bytes
Introduce a new extension header type RDSV3_EXTHDR_RDMA_BYTES for an
RDMA initiator to exchange rdma byte counts with its target.
Add new flag to RDS header: RDS_FLAG_EXTHDR_EXTENSION
Add new extension to RDS header: rds_ext_header_rdma_bytes
Please note:
Linux RDS and Solaris RDS have a mismatch in header flags. Solaris RDS
assigned flag 0x08 to RDS_FLAG_EXTHDR_EXTENSION, but Linux already uses
0x08 for the flag RDS_FLAG_HB_PING.
This patch requires the fix below from the Solaris side:
BUG 19065367 - unified RDSV3_EXTHDR_RDMA_BYTES with Linux
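A hedged sketch of the new header pieces, following the existing
rds_ext_header_* pattern in net/rds/rds.h; the flag value chosen for
Linux and the field width are assumptions (0x08 is taken by
RDS_FLAG_HB_PING):

    #define RDS_FLAG_EXTHDR_EXTENSION 0x10 /* assumed value on Linux */

    struct rds_ext_header_rdma_bytes {
            __be32 h_rdma_bytes; /* RDMA byte count reported to the target */
    };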
RDS: Ensure non-zero SL uses correct path before lane 0 connection is dropped
There is an issue with the following scenario:
* A non-zero lane goes down first with send completion error 12.
* Before the lane 0 connection goes down, the peer initiates a connection
request on the non-zero lane.
* This non-zero lane connection request may be using old ARP entries of lane 0.
This also fixes a race condition between connection establishment and drop
in the following scenario:
* The non-zero lane connection is dropped.
* A non-zero lane connection is initiated, and this time it finds a proper
route and the connection request goes through.
* Before the non-zero lane connection is established at the RDS layer, the
zero lane connection gets dropped.
* Now this zero-lane connection drop will drop the non-zero lane connection
as well (on the assumption that the non-zero lane did not find a proper
route).
* When the non-zero lane connection establishment event is received (REP
packet), we have a race between the connection establishment event on one
CPU and the connection drop on another CPU.
Signed-off-by: Pavel Emelyanov <xemul@openvz.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Wengang Wang <wen.gang.wang@oracle.com>
Signed-off-by: Guangyu Sun <guangyu.sun@oracle.com>
(cherry picked from commit 92ad60f3efabfcd9eb40e983a131c730829c0d90)
Mukesh Kacker [Sat, 31 May 2014 00:44:16 +0000 (17:44 -0700)]
RDS: active bonding - ports may not failback if all ports go down
When active bonding is enabled, ports fail over and fail back as they
are disabled/enabled. If ALL ports go down, then no failover happens,
since there is no available port to fail over to. In that case, some
ports not hosting any migrated interfaces are not resurrected.
The fix resurrects ports when they do not have an IP address set, are
failing back, and RDS active bonding has state on them. Some debug/log
messages are also improved, such as to indicate when a failover fails
to happen.
Bang Nguyen [Wed, 16 Apr 2014 20:56:02 +0000 (13:56 -0700)]
RDS: SA query optimization
All QoS lanes share the same physical path between an IP pair; the only
difference is the service level, which affects the quality of service
of each lane. Given that, we have the following optimization:
1. Lane 0 issues the SA query request to the SM. All other lanes wait
for lane 0 to finish route resolution, then copy in the resolved path
and fill in their own service level.
2. One-sided reconnect to reduce reconnect racing, thus further
reducing the number of SA queries to the SM.
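A sketch of step 1 for a non-zero lane (the field and variable names are
assumptions; rdma_set_ib_paths() is the RDMA CM call for installing a
pre-resolved path):

    /* copy lane 0's resolved path and patch in this lane's SL */
    struct ib_sa_path_rec rec = lane0_conn->path_rec;

    rec.sl = conn->c_tos; /* only the service level differs */
    rdma_set_ib_paths(ic->i_cm_id, &rec, 1);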
Reducing brownout for non-zero lanes:
In some cases, RDMA CM delays the disconnect event after a switch/node
failure, and this causes extra brownout for the RDS reconnection. The
workaround is to have lane 0 probe the other lanes by sending a HB
message. If a lane is down, this causes a send completion error and an
immediate reconnect.
Bang Nguyen [Wed, 7 May 2014 21:48:51 +0000 (14:48 -0700)]
RDS: Remove cond_resched() in RX tasklet
Re-install the base fix 17829338 and replace
spin_lock_irqsave(rx_lock)/spin_unlock_irqrestore(rx_lock) with
spin_lock_bh(rx_lock)/spin_unlock_bh(rx_lock) to resolve bugs 18413711
and 18461816. rx_lock is used to prevent concurrent reaping between the
RX tasklet and the worker.
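A sketch of the locking change (the lock's home in rds_ib_connection is
an assumption):

    /* the RX tasklet runs in softirq context, so disabling bottom
     * halves is enough to exclude it from the worker; no need to
     * mask hard IRQs with irqsave/irqrestore */
    spin_lock_bh(&ic->i_rx_lock);
    /* ... reap receive completions ... */
    spin_unlock_bh(&ic->i_rx_lock);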
Commit 6cf7cc30 "rds: dynamic active bonding configuration" introduced
a regression: late-joining IB interfaces were not configured correctly.
When the active bonding configuration showed both interfaces down, IP
failover was not happening. This patch fixes this issue.
Signed-off-by: Bang Nguyen <bang.nguyen@oracle.com>
Signed-off-by: Ajaykumar Hotchandani <ajaykumar.hotchandani@oracle.com>
(cherry picked from commit 12755dbf7b4adc8ea2a935900b81c384731f6fff)
Signed-off-by: Jerry Snitselaar <jerry.snitselaar@oracle.com>
(cherry picked from commit 12eebc4b9e28c3899089277ff9725bdcff1829aa)
This fix addresses the issue of an idled QoS connection not getting a
disconnect event when the remote peer reboots. This causes a delayed
reconnect, and hence application brownout, when the peer comes back
online. The fix is to proactively drop and reconnect the idled lanes
when the base lane goes through a reconnect to the rebooted peer, in
effect forcing all the lanes to go through the reconnect at the same
time.
Signed-off-by: Bang Nguyen <bang.nguyen@oracle.com>
Signed-off-by: Chien-Hua Yen <chien.yen@oracle.com>
(cherry picked from commit f51ccefb3a0b9485da5cc5f66bb1e311f61bd70b)
Signed-off-by: Chien-Hua Yen <chien.yen@oracle.com>
Signed-off-by: Bang Nguyen <bang.nguyen@oracle.com>
(cherry picked from commit 5cad478d7148ac4b6fc2d6eb78d6bf5a576d69e1)
Signed-off-by: Giri Adari <giri.adari@oracle.com>
Signed-off-by: Richard Frank <richard.frank@oracle.com>
Signed-off-by: Chien-Hua Yen <chien.yen@oracle.com>
(cherry picked from commit 7b66ddd7f6a5b023191d74949fab41af245775a3)
Signed-off-by: Jerry Snitselaar <jerry.snitselaar@oracle.com>
Conflicts:
net/rds/rds.h
(cherry picked from commit 0373566ba0d74f655ae83e09748f7cc8d553f351)
Bang Nguyen [Tue, 20 Aug 2013 14:27:21 +0000 (07:27 -0700)]
RDS: double free rdma_cm_id
RDS currently offloads rdma_destroy_id() to an aux thread as part of the
connection shutdown. This was to work around a bug in which
rdma_destroy_id() could block and cause RDS reconnect to hang. By
queuing the rdma_destroy_id() work, we unfortunately open up a timing
window in which a pending CMA_ADDR_QUERY request might not get canceled
right away and can race with rdma_destroy_id().
In this case, rdma_destroy_id() gets called and frees the cm id. Then
CMA_ADDR_QUERY completes and calls the RDS event handler, which calls
rds_resolve_route on the destroyed cm id. The event handler returns
failure, which causes RDMA CM to call rdma_destroy_id() again on the
same cm id! Hence the problem.
Since the rdma_destroy_id() bug has been fixed by MLX to offload the blocking
operation to the worker thread, RDS no longer needs to queue up
rdma_destroy_id(). This closes up the window above and fixes the problem.
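A sketch of the shutdown path after the fix, destroying the cm_id
synchronously (i_cm_id follows the existing rds_ib_connection naming):

    if (ic->i_cm_id) {
            /* a synchronous destroy also cancels a pending CMA_ADDR_QUERY,
             * so no stale event can race with the teardown */
            rdma_destroy_id(ic->i_cm_id);
            ic->i_cm_id = NULL;
    }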
Bang Nguyen [Sat, 17 Aug 2013 04:41:25 +0000 (21:41 -0700)]
RDS: Reconnect stalls for 15s
On Switch reboot, both end nodes would try to reconnect at the same time.
This can cause a race b/w the gratuitous ARP and the IP resolution,
resulting in a path to a down port. The CONNECT request sent on this path
is stuck until the 15s timeout at which time the connection is dropped
and re-established.
The fix was to indroduce a reconnect delay b/w the ARP and the reconnect
to minimize the race and avoid the 15s timeout.
Venkat Venkatsubra [Thu, 8 Aug 2013 05:15:05 +0000 (22:15 -0700)]
RDS: added stats to track and display receive side memory usage
Added these stats:
1. per-connection stat for number of receive buffers in cache
2. global stat for the same across all connections
3. number of bytes in socket receive buffer
Since stats are implemented using per-cpu variables and RDS currently
does unsigned arithmetic to add them up, separate counters (one for
addition and one for subtraction) are used for (2) and (3).
In the future we might change it to signed computation.
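A sketch of the split counters (the stat names are hypothetical; the
current value is computed as added minus removed when the per-cpu sums
are aggregated):

    rds_stats_inc(s_recv_bufs_cache_added);   /* buffer enters the cache */
    rds_stats_inc(s_recv_bufs_cache_removed); /* buffer leaves the cache */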
Bang Nguyen [Thu, 15 Aug 2013 02:10:00 +0000 (19:10 -0700)]
RDS: RDS reconnect stalls
After successfully negotiating the version at the lower protocol, RDS
incorrectly set the proposed version to the higher protocol, causing
the subsequent reconnect to stall.
The fix is to not change the proposed version after the initial
connection setup.
Ahmed Abbas [Thu, 18 Jul 2013 23:59:59 +0000 (16:59 -0700)]
add NETFILTER support
Orabug: 17082619
Adds the ability for the RDS code to support the NETFILTER kernel interfaces.
This allows for packet inspection, modification, and potential redirection as
the packets flow through the lower layers of the RDS code.
Jay Fenlason (fenlason@redhat.com) found a bug:
recvfrom() on an RDS socket can return the contents of random kernel
memory to userspace if it is called with an address length larger than
sizeof(struct sockaddr_in).
rds_recvmsg() also fails to set the addr_len parameter properly before
returning, but that's just a bug.
There are also a number of cases where recvfrom() can return an entirely
bogus address. Anything in rds_recvmsg() that returns a non-negative
value but does not go through the "sin = (struct sockaddr_in *)msg->msg_name;"
code path at the end of the while(1) loop will return up to 128 bytes of
kernel memory to userspace.
I wrote two test programs to reproduce this bug; you will see that in
rds_server, fromAddr gets overwritten and the following sock_fd is
destroyed.
Yes, it is the programmer's fault to set msg_namelen incorrectly, but it
is better to make the kernel copy the real length of the address to
userspace in such cases.
How to run the test programs?
I tested them on a 32-bit x86 system, 3.5.0-rc7. You will see something
like:
server is waiting to receive data...
old socket fd=3
server received data from client:data from client
msg.msg_namelen=32
new socket fd=-1067277685
sendmsg()
: Bad file descriptor
printf("server is waiting to receive data...\n");
msg.msg_name = &fromAddr;
/*
* I add 16 to sizeof(fromAddr), ie 32,
* and pay attention to the definition of fromAddr,
* recvmsg() will overwrite sock_fd,
* since kernel will copy 32 bytes to userspace.
*
* If you just use sizeof(fromAddr), it works fine.
* */
msg.msg_namelen = sizeof(fromAddr) + 16;
/* msg.msg_namelen = sizeof(fromAddr); */
msg.msg_iov = &iov;
msg.msg_iovlen = 1;
msg.msg_iov->iov_base = recvBuffer;
msg.msg_iov->iov_len = 128;
msg.msg_control = 0;
msg.msg_controllen = 0;
msg.msg_flags = 0;
while (1) {
printf("old socket fd=%d\n", sock_fd);
if (recvmsg(sock_fd, &msg, 0) == -1) {
perror("recvmsg() error\n");
close(sock_fd);
exit(1);
}
printf("server received data from client:%s\n", recvBuffer);
printf("msg.msg_namelen=%d\n", msg.msg_namelen);
printf("new socket fd=%d\n", sock_fd);
strcat(recvBuffer, "--data from server");
if (sendmsg(sock_fd, &msg, 0) == -1) {
perror("sendmsg()\n");
close(sock_fd);
exit(1);
}
}
close(sock_fd);
return 0;
}
Signed-off-by: Weiping Pan <wpan@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Guangyu Sun <guangyu.sun@oracle.com>
(cherry picked from commit eb3ccc4c696e5c4a10d324886fd061ea88bab6c4)
Signed-off-by: Bang Nguyen <bang.nguyen@oracle.com>
Acked-by: Zheng Li <zheng.x.li@oracle.com>
Signed-off-by: Jerry Snitselaar <jerry.snitselaar@oracle.com>
(cherry picked from commit 78b7d86911046c3a10ffa52d90f4f1a4523d7ac3)
When rds_ib_remove_one() returns, the driver's mlx4_ib_remove_one()
function destroys the ib_device, so we must clear rds_ibdev->dev to
NULL. Otherwise, when an rds connection is released, rds_ib_dev_free()
goes through the ib_device (i.e. rds_ibdev->dev) to release the MRs and
FMRs, and reusing the already-released ib_device will cause a crash.
RDS: make sure rds_ib_remove_one() returns only after the device is freed.
This is to avoid a possible race condition in which rds_ib_remove_one()
returns prematurely and IB removes the underlying device. RDS later
tries to free the device and trips over.
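A sketch of the synchronous removal (the completion field and teardown
helper are assumptions):

    static void rds_ib_remove_one(struct ib_device *device)
    {
            struct rds_ib_device *rds_ibdev =
                    ib_get_client_data(device, &rds_ib_client);

            if (!rds_ibdev)
                    return;
            rds_ib_dev_shutdown(rds_ibdev);             /* hypothetical helper */
            wait_for_completion(&rds_ibdev->free_comp); /* return only after freed */
    }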
"When fed mangled socket data, rds will trust what userspace gives it,
and tries to allocate enormous amounts of memory larger than what
kmalloc can satisfy."
Reported-by: Dave Jones <davej@redhat.com>
Cc: Dave Jones <davej@redhat.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com>
Signed-off-by: Cong Wang <amwang@redhat.com>
Acked-by: Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Guangyu Sun <guangyu.sun@oracle.com>
(cherry picked from commit 1524f0a4e3e23b3c8b4235eb7d9932129cc0006b)
jeff.liu [Mon, 8 Oct 2012 18:57:27 +0000 (18:57 +0000)]
RDS: fix rds-ping spinlock recursion
This is the revised patch for fixing the rds-ping spinlock recursion,
according to Venkat's suggestions.
The RDS ping/pong over TCP feature has been broken for years (2.6.39 to
3.6.0), since we have to set TCP cork and call kernel_sendmsg() between
ping/pong, both of which need to lock "struct sock *sk". However, this
lock is already held before the rds_tcp_data_ready() callback is
triggered. As a result, we always face spinlock recursion, which
results in a system panic.
Given that RDS ping is only used to test the connectivity and not for
serious performance measurements, we can queue the pong transmit to
rds_wq as a delayed response.
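Roughly, instead of transmitting the pong inline from the data_ready
callback (where sk is already locked), the response is deferred to the
RDS workqueue:

    /* rds_wq and c_send_w are the existing RDS workqueue and send worker */
    queue_delayed_work(rds_wq, &conn->c_send_w, 0);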
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
CC: Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com>
CC: David S. Miller <davem@davemloft.net>
CC: James Morris <james.l.morris@oracle.com>
Signed-off-by: Jie Liu <jeff.liu@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit 5175a5e76bbdf20a614fb47ce7a38f0f39e70226)
Signed-off-by: Jerry Snitselaar <jerry.snitselaar@oracle.com>
Conflicts:
net/rds/send.c
rds: UNDO reverts done for rebase code to compile with Linux 4.1 APIs
Commit 163377dd82f2d81809aabe736a2e0ea515055a69 reverts net/rds to the
common ancestor of upstream and UEK2 in order to rebase the UEK2
patches. This commit undoes the reverts needed to compile with the
Linux 4.1 APIs.
UNDO Revert "net: Replace get_cpu_var through this_cpu_ptr" for net/rds
This commit does UNDO of revert of commit 903ceff7ca7b4d80c083a80ee5163b74e9fa359f for net/rds.
UNDO Revert "net: introduce helper macro for_each_cmsghdr" for net/rds
This commit does UNDO of revert of commit f95b414edb18de59940dcebbefb49cf25c6d505c for net/rds
UNDO Revert "net: Remove iocb argument from sendmsg and recvmsg" for net/rds
This commit does UNDO of revert of commit 1b784140474e4fc94281a49e96c67d29df0efbde for net/rds.
These commits were reverted earlier to rebase unmodified UEK2 RDS code
(UNDO needed to compile to new Linux 4.1 kernel APIs - changed *after* Linux 3.18)
Dotan Barak [Thu, 7 Jun 2012 05:56:34 +0000 (08:56 +0300)]
RDS: fixed compilation warnings
Fixed the following compilation warnings:
net/rds/send.c: In function 'rds_send_xmit':
net/rds/send.c:299: warning: suggest parentheses around && within ||
net/rds/rdma.c: In function 'rds_cmsg_rdma_dest':
net/rds/rdma.c:697: warning: format '%Lx' expects type 'long long unsigned int', but argument 2 has type 'u32'
net/rds/ib_recv.c: In function 'rds_ib_srqs_init':
net/rds/ib_recv.c:1570: warning: 'return' with no value, in function returning non-void
Signed-off-by: Bang Nguyen <bang.nguyen@oracle.com>
Signed-off-by: Dotan Barak <dotanb@dev.mellanox.co.il>
Bang Nguyen [Sun, 19 Feb 2012 20:19:57 +0000 (12:19 -0800)]
RDS Asynchronous Send support
1. Same behavior as RDMA send, i.e., generate a notification on IB
completion.
2. On error, the connection is closed for traffic, i.e., new sends are
rejected and the client retries.
3. To guarantee ordering, all pending async (RDMA/bcopy) sends after the
failed send will also be aborted, in the order in which they were
submitted.
4. The connection is re-opened for traffic after all the failed-send
notifications have been reaped by the client.
Dotan Barak [Wed, 15 Feb 2012 16:00:50 +0000 (18:00 +0200)]
rds: fix compilation warnings
net/rds/ib_recv.c: In function 'rds_ib_srq_event':
net/rds/ib_recv.c:1490: warning: too many arguments for format
net/rds/ib_recv.c:1484: warning: unused variable 'srq_attr'
net/rds/ib_recv.c: In function 'rds_ib_srq_init':
net/rds/ib_recv.c:1524: warning: passing argument 1 of 'ERR_PTR' makes
integer from pointer without a cast
include/linux/err.h:20: note: expected 'long int' but argument is of
type 'struct ib_srq *'
net/rds/ib_recv.c:1524: warning: format '%d' expects type 'int', but
argument 2 has type 'void *'
Bang Nguyen [Fri, 3 Feb 2012 16:10:06 +0000 (11:10 -0500)]
RDS Quality Of Service
RDS QoS is an extension of IB QoS to provide clients the ability to
segregate traffic flows and define policy to regulate them.
Internally, each traffic flow is represented by a connection with all of its
independent resources like that of a normal connection, and is
differentiated by service type. In other words, there can be multiple
connections between an IP pair and each supports a unique service type.
Service type (TOS) is user-defined and can be configured to satisfy certain
traffic requirements. For example, one service type may be configured for
high-priority low-latency traffic, another for low-priority high-bandwidth
traffic, and so on.
TOS is socket based. A client can set the TOS on a socket via an IOCTL
and must do so before initiating any traffic. Once set, the TOS cannot
be changed.
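A hedged user-space sketch; the ioctl name SIOCRDSSETTOS and the TOS
value are assumptions:

    int fd = socket(PF_RDS, SOCK_SEQPACKET, 0);
    uint8_t tos = 2; /* user-defined service type */

    /* must be done before any traffic; the TOS is then fixed */
    if (ioctl(fd, SIOCRDSSETTOS, &tos) == -1)
            perror("ioctl(SIOCRDSSETTOS)");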
Chris Mason [Fri, 3 Feb 2012 16:09:49 +0000 (11:09 -0500)]
RDS: make sure rds_send_xmit doesn't loop forever
rds_send_xmit can get stuck doing work on behalf of other senders. This
breaks out if we've been working too long. The work queue will get kicked
to finish off any other requests if our current process gives up.
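A sketch of the break-out, assuming a batch counter and limit along the
lines of the upstream fix (the names are approximations):

    if (++batch_count >= send_batch_count) {
            /* hand the remaining work to the workqueue and stop hogging */
            queue_delayed_work(rds_wq, &conn->c_send_w, 1);
            ret = -EAGAIN;
            break;
    }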
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Signed-off-by: Bang Nguyen <bang.nguyen@oracle.com>
Chris Mason [Fri, 3 Feb 2012 16:09:36 +0000 (11:09 -0500)]
RDS: don't test ring_empty or ring_low without locks held
The math in the ring functions can't be trusted unless you're either the only
person adding to the ring or the only person freeing from it. If there are no
locks held at all you can end up hitting bogus assertions around the ring counters.
This changes the rds_ib_recv_refill code and the recv tasklet code to make
sure proper locks are held before we use rds_ib_ring_empty or rds_ib_ring_low.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Signed-off-by: Bang Nguyen <bang.nguyen@oracle.com>
Venkat Venkatsubra [Fri, 3 Feb 2012 16:09:07 +0000 (11:09 -0500)]
RDS: avoid double destroy of cm_id when rdma_resolve_route fails
It crashes in rds_ib_conn_shutdown because it was using a freed cm_id. The
cm_id had been freed quite a while back (more than 15 seconds earlier)
during an earlier connect attempt.
This was the sequence of the earlier connect attempt: rds_ib_conn_connect
calls rdma_resolve_addr. The synchronous part of rdma_resolve_addr
succeeds, but the asynchronous part fails at some point. The RDMA
Connection Manager returns the event RDMA_CM_EVENT_ADDR_RESOLVED; this
part succeeds. Next, RDS calls rdma_resolve_route from
rds_rdma_cm_event_handler. This fails. We return this error back to the
RDMA CM addr_handler (in cma.c), which destroys the cm_id.
Later when a new connect req comes in from the remote side, we shutdown this cm_id
and try to reconnect:
	/*
	 * after 15 seconds, give up on existing connection
	 * attempts and make them try again. At this point
	 * it's no longer a race but something has gone
	 * horribly wrong
	 */
	if (now > conn->c_connection_start &&
	    now - conn->c_connection_start > 15) {
		printk(KERN_CRIT "rds connection racing for 15s, forcing reset "
		       "connection %u.%u.%u.%u->%u.%u.%u.%u\n",
		       NIPQUAD(conn->c_laddr), NIPQUAD(conn->c_faddr));
		rds_conn_drop(conn);
		....
We crash during the shutdown.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Signed-off-by: Bang Nguyen <bang.nguyen@oracle.com>
Chris Mason [Fri, 3 Feb 2012 16:09:07 +0000 (11:09 -0500)]
RDS: make sure rds_send_drop_to properly takes the m_rs_lock
rds_send_drop_to is used during socket tear down to find all the
messages on the socket and clean them up. It can race with the
acking code unless it takes the m_rs_lock on each and every message.
This plugs a hole where we didn't take m_rs_lock on any message that
didn't have the RDS_MSG_ON_CONN set. Taking m_rs_lock avoids
double frees and other memory corruptions as the ack code trusts
the message m_rs pointer on a socket that had actually been freed.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Signed-off-by: Bang Nguyen <bang.nguyen@oracle.com>
Chris Mason [Fri, 3 Feb 2012 16:09:07 +0000 (11:09 -0500)]
RDS: kick krdsd to send congestion map updates
We can get into a deadlock on the recv spinlock because congestion map
updates can be sent in the receive path. This pushes the work off to
krdsd instead.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Signed-off-by: Bang Nguyen <bang.nguyen@oracle.com>
Chris Mason [Fri, 3 Feb 2012 16:09:07 +0000 (11:09 -0500)]
RDS: add debugging code around sock_hold and sock_put.
RDS had a recent series of memory corruptions because of
a use-after-free and double-free of rds sockets. This adds
some debugging code around sock_put and sock_hold to
catch any similar bugs and spit out useful debugging info.
This is a temporary commit while customers try out our fix.
Signed-off-by: Chris Mason <chris.mason@oracle.com>
Signed-off-by: Bang Nguyen <bang.nguyen@oracle.com>