vhost: Avoid TX queue when writing directly is faster
author David Woodhouse <dwmw2@infradead.org>
Tue, 29 Jun 2021 12:20:34 +0000 (13:20 +0100)
committer David Woodhouse <dwmw2@infradead.org>
Thu, 1 Jul 2021 20:46:06 +0000 (21:46 +0100)
Using vhost makes high-volume transfers go nice and fast, especially
when we are using 100% of a CPU in the single-threaded OpenConnect
process, since we offload the kernel←→user copies for the tun packets
to the vhost thread instead of having to do them from our single
thread too.

However, for a lightly used link with *occasional* packets, which is
pretty much the definition of a VPN being used for VoIP, it adds a lot
of unwanted latency. If our userspace thread is otherwise going to be
*idle*, and fall back into select() to wait for something else to do,
then we might as well just write the packet *directly* to the tun
device.

So... when the queue is stopped and would need to be kicked, and if
there are only a *few* (heuristic: half max_qlen) packets on the
queue to be sent, just send them directly.
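The decision above boils down to a simple predicate: bypass the vhost
TX ring only when the ring is idle (it would need a kick to start) and
fewer than half of max_qlen packets are waiting. Here is a minimal,
self-contained sketch of that heuristic; the struct and function names
are illustrative and not OpenConnect's actual API:

```c
#include <assert.h>

/* Hypothetical, simplified model of the commit's heuristic. In the
 * real code the "needs_kick" condition comes from the vhost ring
 * state (!*kick plus the avail_event comparison). */
struct tx_state {
    int needs_kick;   /* nonzero if the vhost ring is already active */
    int queue_count;  /* packets currently waiting on incoming_queue */
    int max_qlen;     /* configured maximum queue length */
};

/* Returns 1 when writing directly to the tun device is the better
 * choice: the ring is idle and the queue is short, so latency wins
 * over throughput. */
static int should_write_directly(const struct tx_state *s)
{
    return !s->needs_kick && s->queue_count < s->max_qlen / 2;
}
```

With max_qlen of 16, a burst of up to 7 packets on an idle ring goes
straight to the tun device; anything more, or any in-flight vhost
activity, keeps the packets on the ring where the vhost thread can
drain them in bulk.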

Signed-off-by: David Woodhouse <dwmw2@infradead.org>
vhost.c

diff --git a/vhost.c b/vhost.c
index e0593f66d84e88062f6ac7479b923296bb38851b..f5414496c0b3ef4a87c25b485e848db212664146 100644 (file)
--- a/vhost.c
+++ b/vhost.c
@@ -392,6 +392,23 @@ static inline int process_ring(struct openconnect_info *vpninfo, int tx, uint64_
                        this = dequeue_packet(&vpninfo->incoming_queue);
                        if (!this)
                                break;
+
+                       /* If only a few packets on the queue, just send them
+                        * directly. The latency is much better. We benefit from
+                        * vhost-net TX when we're overloaded and want to use all
+                        * our CPU on the RX and crypto; there's not a lot of point
+                        * otherwise. */
+                       if (!*kick && vpninfo->incoming_queue.count < vpninfo->max_qlen / 2 &&
+                           next_avail == (&ring->used->flags)[2 + (vpninfo->vhost_ring_size * 4)]) {
+                               if (!os_write_tun(vpninfo, this)) {
+                                       vpninfo->stats.rx_pkts++;
+                                       vpninfo->stats.rx_bytes += this->len;
+
+                                       free_pkt(vpninfo, this);
+                                       continue;
+                               }
+                               /* Failed! Pretend it never happened; queue for vhost */
+                       }
                        memset(&this->virtio.h, 0, sizeof(this->virtio.h));
                } else {
                        int len = vpninfo->ip_info.mtu;