Vladimir Oltean [Fri, 22 Sep 2023 13:31:07 +0000 (15:31 +0200)]
net: dsa: microchip: move REG_SW_MAC_ADDR to dev->info->regs[]
Defining macros which have the same name but different values is bad
practice, because it makes it hard to avoid code duplication. The same
code does different things, depending on the file it's placed in.
Case in point, we want to access REG_SW_MAC_ADDR from ksz_common.c, but
currently we can't, because we don't know which kszXXXX_reg.h to include
from the common code.
Remove the REG_SW_MAC_ADDR_{0..5} macros from ksz8795_reg.h and
ksz9477_reg.h, and re-add this register offset to the dev->info->regs[]
array.
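For illustration, a minimal sketch of the resulting shape (register offsets
and surrounding entries are illustrative, not the exact driver values):

  /* ksz_common.h: one shared index replaces the per-chip macros */
  enum ksz_regs {
          /* ... existing entries ... */
          REG_SW_MAC_ADDR,
  };

  /* per-chip offset tables (offsets illustrative): */
  static const u16 ksz8795_regs[] = {
          [REG_SW_MAC_ADDR] = 0x68,
  };

  static const u16 ksz9477_regs[] = {
          [REG_SW_MAC_ADDR] = 0x0302,
  };

  /* ksz_common.c can now program the address without knowing the chip: */
  static void ksz_set_switch_macaddr(struct ksz_device *dev, const u8 *addr)
  {
          int i;

          for (i = 0; i < ETH_ALEN; i++)
                  ksz_write8(dev, dev->info->regs[REG_SW_MAC_ADDR] + i,
                             addr[i]);
  }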
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Lukasz Majewski <lukma@denx.de> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
net: dsa: tag_ksz: Extend ksz9477_xmit() for HSR frame duplication
The KSZ9477 has support for HSR (High-Availability Seamless Redundancy).
One of its offloading features (i.e. performed in the switch IC hardware)
is to duplicate a received frame to both HSR-aware switch ports.
To achieve this, the tail tag needs to be modified: both ports must be
marked as destination (egress) ones.
The NETIF_F_HW_HSR_DUP flag indicates that the device supports HSR and
assures (in the HSR core code) that the frame is sent only once from the
host to the switch, with the tail tag indicating both ports.
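As a simplified sketch (the real ksz9477_xmit() also handles priority and
PTP bits in the tail tag):

  static struct sk_buff *ksz9477_xmit(struct sk_buff *skb,
                                      struct net_device *dev)
  {
          struct dsa_port *dp = dsa_slave_to_port(dev);
          u16 val = BIT(dp->index);       /* single egress port by default */

          /* When the user port is part of an offloaded HSR device, mark
           * every HSR ring port as a destination so the switch hardware
           * duplicates the frame (NETIF_F_HW_HSR_DUP).
           */
          if (dev->features & NETIF_F_HW_HSR_DUP) {
                  struct dsa_port *other_dp;

                  dsa_hsr_foreach_port(other_dp, dp->ds, dp->hsr_dev)
                          val |= BIT(other_dp->index);
          }

          /* ... write 'val' into the tail tag as before ... */
          return skb;
  }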
Signed-off-by: Lukasz Majewski <lukma@denx.de> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Vladimir Oltean [Fri, 22 Sep 2023 13:31:05 +0000 (15:31 +0200)]
net: dsa: notify drivers of MAC address changes on user ports
In some cases, drivers may need to veto the changing of a MAC address on
a user port. Such is the case with KSZ9477 when it offloads an HSR device,
because it programs the MAC address of multiple ports to a shared
hardware register. Those ports need to have equal MAC addresses for the
lifetime of the HSR offload.
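A hedged sketch of such a veto in the driver (the hook name and the
'hsr_ports' field are assumptions for illustration, not the exact code):

  static int ksz_port_set_mac_address(struct dsa_switch *ds, int port,
                                      const unsigned char *addr)
  {
          struct ksz_device *dev = ds->priv;

          /* While the HSR offload is active, multiple ports share one
           * hardware MAC address register; refuse changes.
           */
          if (dev->hsr_ports) {
                  dev_err(ds->dev,
                          "cannot change MAC address while HSR offload is active\n");
                  return -EBUSY;
          }

          return 0;
  }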
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Lukasz Majewski <lukma@denx.de> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Vladimir Oltean [Fri, 22 Sep 2023 13:31:04 +0000 (15:31 +0200)]
net: dsa: propagate extack to ds->ops->port_hsr_join()
Drivers can provide meaningful error messages which state a reason why
they can't perform an offload, and dsa_slave_changeupper() already has
the infrastructure to propagate these over netlink rather than printing
to the kernel log. So pass the extack argument and modify the xrs700x
driver's port_hsr_join() prototype.
Also take the opportunity to use the extack for the two -EOPNOTSUPP cases
in xrs700x_hsr_join().
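For example, a sketch of one of the two cases after the change (the exact
error strings may differ):

  static int xrs700x_hsr_join(struct dsa_switch *ds, int port,
                              struct net_device *hsr,
                              struct netlink_ext_ack *extack)
  {
          enum hsr_version ver;
          int ret;

          ret = hsr_get_version(hsr, &ver);
          if (ret)
                  return ret;

          if (ver != HSR_V1) {
                  /* reported over netlink instead of the kernel log */
                  NL_SET_ERR_MSG_MOD(extack, "Only HSR v1 supported");
                  return -EOPNOTSUPP;
          }

          /* ... */
          return 0;
  }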
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Lukasz Majewski <lukma@denx.de> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Add a quirk for a copper SFP that identifies itself as "FS" "SFP-2.5G-T".
The module's PHY is inaccessible, and the module can only run 2500base-X
to the host, without negotiation. Add a quirk entry that enables the
2500base-X interface mode, advertises 2500base-T support, and disables
autonegotiation.
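Based on the description, the quirk would look roughly like this (helper
and macro names are assumptions modeled on existing sfp.c quirks):

  static void sfp_quirk_oem_2_5g(const struct sfp_eeprom_id *id,
                                 unsigned long *modes,
                                 unsigned long *interfaces)
  {
          /* copper 2.5G module with an inaccessible PHY */
          linkmode_set_bit(ETHTOOL_LINK_MODE_2500baseT_Full_BIT, modes);
          __set_bit(PHY_INTERFACE_MODE_2500BASEX, interfaces);
          sfp_quirk_disable_autoneg(id, modes, interfaces);
  }

  /* quirk table entry: match vendor "FS", part "SFP-2.5G-T" */
  SFP_QUIRK_M("FS", "SFP-2.5G-T", sfp_quirk_oem_2_5g),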
'n_tables' is small: UDP_TUNNEL_NIC_MAX_TABLES = 4 at most. So there
is no real point in allocating the 'entries' pointer array with a
dedicated memory allocation.
Using a flexible array for struct udp_tunnel_nic->entries avoids the
overhead of an additional memory allocation.
This also saves an indirection when the array is accessed.
Finally, __counted_by() can be used for run-time bounds checking if
configured and supported by the compiler.
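A minimal sketch of the pattern (field layout abridged from the
description):

  struct udp_tunnel_nic {
          /* ... */
          unsigned int n_tables;
          struct udp_tunnel_nic_table_entry *entries[] __counted_by(n_tables);
  };

  /* one allocation sized with struct_size(), instead of a separate
   * allocation for the pointer array:
   */
  utn = kzalloc(struct_size(utn, entries, n_tables), GFP_KERNEL);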
As we don't specify the MTU in the driver, the framework
will fall back to 1500 bytes and this doesn't work very
well when we try to attach a DSA switch:
eth1: mtu greater than device maximum
ixp4xx_eth c800a000.ethernet eth1: error -22 setting
MTU to 1504 to include DSA overhead
I checked the developer docs and the hardware can actually
do really big frames, so update the driver accordingly.
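The usual fix is to advertise the hardware limit in the probe path (the
exact maximum comes from the developer docs, so the value below is
illustrative):

  /* let the core accept MTUs large enough for the DSA tagging overhead */
  ndev->min_mtu = ETH_MIN_MTU;
  ndev->max_mtu = 14320;  /* illustrative; actual limit per the docs */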
Paolo Abeni [Tue, 3 Oct 2023 08:05:24 +0000 (10:05 +0200)]
Merge branch 'tcp_metrics-four-fixes'
Eric Dumazet says:
====================
tcp_metrics: four fixes
Looking at an inconclusive syzbot report, I was surprised
to see that the tcp_metrics cache on my host was full of
useless entries, even though I have
/proc/sys/net/ipv4/tcp_no_metrics_save set to 1.
While looking more closely I found a total of four issues.
====================
Eric Dumazet [Fri, 22 Sep 2023 22:03:55 +0000 (22:03 +0000)]
tcp_metrics: do not create an entry from tcp_init_metrics()
tcp_init_metrics() only wants to get metrics if they were
previously stored in the cache. Creating an entry adds useless cost,
especially when tcp_no_metrics_save is set.
Fixes: 51c5d0c4b169 ("tcp: Maintain dynamic metrics in local cache.") Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: David Ahern <dsahern@kernel.org> Acked-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Eric Dumazet [Fri, 22 Sep 2023 22:03:54 +0000 (22:03 +0000)]
tcp_metrics: properly set tp->snd_ssthresh in tcp_init_metrics()
We need to set tp->snd_ssthresh to TCP_INFINITE_SSTHRESH
in case tcp_get_metrics() fails for some reason.
Fixes: 9ad7c049f0f7 ("tcp: RFC2988bis + taking RTT sample from 3WHS for the passive open side") Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: David Ahern <dsahern@kernel.org> Acked-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
====================
Fix implicit sign conversions in handshake upcall
An internal static analysis tool noticed some implicit sign
conversions for some of the arguments in the handshake upcall
protocol.
====================
====================
mlxsw: Annotate structs with __counted_by
This annotates several mlxsw structures with the coming __counted_by attribute
for bounds checking of flexible arrays at run-time. For more details, see
commit dd06e72e68bc ("Compiler Attributes: Add __counted_by macro").
====================
mlxsw: spectrum_span: Annotate struct mlxsw_sp_span with __counted_by
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct mlxsw_sp_span.
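The annotation itself is a one-line change; a sketch with surrounding
fields elided (the count member name follows the struct's existing field):

  struct mlxsw_sp_span {
          /* ... */
          int entries_count;
          struct mlxsw_sp_span_entry entries[] __counted_by(entries_count);
  };

With CONFIG_UBSAN_BOUNDS enabled, an access like span->entries[i] with
i >= span->entries_count can now trap at run-time instead of silently
overflowing.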
mlxsw: spectrum_router: Annotate struct mlxsw_sp_nexthop_group_info with __counted_by
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct mlxsw_sp_nexthop_group_info.
mlxsw: spectrum: Annotate struct mlxsw_sp_counter_pool with __counted_by
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct mlxsw_sp_counter_pool.
mlxsw: core: Annotate struct mlxsw_env with __counted_by
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct mlxsw_env.
mlxsw: Annotate struct mlxsw_linecards with __counted_by
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct mlxsw_linecards.
====================
Batch 1: Annotate structs with __counted_by
This is batch 1 of patches touching netdev, preparing for
the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by to structs that would
benefit from the annotation.
Since the element count member must be set before accessing the annotated
flexible array member, some patches also move the member's initialization
earlier. (These are noted in the individual patches.)
net: tulip: Annotate struct mediatable with __counted_by
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct mediatable.
net: openvswitch: Annotate struct dp_meter with __counted_by
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct dp_meter.
net: enetc: Annotate struct enetc_psfp_gate with __counted_by
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct enetc_psfp_gate.
net: openvswitch: Annotate struct dp_meter_instance with __counted_by
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct dp_meter_instance.
net: mana: Annotate struct hwc_dma_buf with __counted_by
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct hwc_dma_buf.
net: ipa: Annotate struct ipa_power with __counted_by
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct ipa_power.
net: mana: Annotate struct mana_rxq with __counted_by
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct mana_rxq.
net: hisilicon: Annotate struct rcb_common_cb with __counted_by
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct rcb_common_cb.
net: enetc: Annotate struct enetc_int_vector with __counted_by
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct enetc_int_vector.
net: hns: Annotate struct ppe_common_cb with __counted_by
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct ppe_common_cb.
ipv6: Annotate struct ip6_sf_socklist with __counted_by
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct ip6_sf_socklist.
ipv4/igmp: Annotate struct ip_sf_socklist with __counted_by
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct ip_sf_socklist.
Cc: Martin KaFai Lau <martin.lau@kernel.org> Cc: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Kees Cook <keescook@chromium.org> Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org> Link: https://lore.kernel.org/r/20230922172858.3822653-2-keescook@chromium.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Prepare for the coming implementation by GCC and Clang of the __counted_by
attribute. Flexible array members annotated with __counted_by can have
their accesses bounds-checked at run-time via CONFIG_UBSAN_BOUNDS
(for array indexing) and CONFIG_FORTIFY_SOURCE (for strcpy/memcpy-family
functions).
As found with Coccinelle[1], add __counted_by for struct fib_info.
David S. Miller [Mon, 2 Oct 2023 07:07:13 +0000 (08:07 +0100)]
Merge branch 'mlxsw-next'
Petr Machata says:
====================
mlxsw: Provide enhancements and new feature
Vadim Pasternak writes:
Patch #1 - Optimize transaction size for efficient retrieval of module
data.
Patch #3 - Enable thermal zone binding with new cooling device.
Patch #4 - Employ standard macros for dividing buffer into the chunks.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
mlxsw: i2c: Utilize standard macros for dividing buffer into chunks
Use standard macro DIV_ROUND_UP() to determine the number of chunks
required for a given buffer.
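That is, instead of open-coding the rounding:

  /* before: */
  num_chunks = (len + chunk_size - 1) / chunk_size;

  /* after: */
  num_chunks = DIV_ROUND_UP(len, chunk_size);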
Signed-off-by: Vadim Pasternak <vadimp@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
mlxsw: core: Extend allowed list of external cooling devices for thermal zone binding
Extend the list of allowed external cooling devices for thermal zone
binding to include devices of type "emc2305".
The motivation is to provide support for the system SN2201, which is
equipped with the Spectrum-1 ASIC.
The system's airflow control is managed by the EMC2305 RPM-based PWM
Fan Speed Controller as the cooling device.
Signed-off-by: Vadim Pasternak <vadimp@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
mlxsw: reg: Limit MTBR register payload to a single data record
The MTBR register is used to read temperatures from multiple sensors in
one transaction, but the driver only reads from a single sensor in each
transaction.
Restrict the payload size of the MTBR register to prevent the
transmission of redundant data to the firmware.
Signed-off-by: Vadim Pasternak <vadimp@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Thu, 21 Sep 2023 20:28:15 +0000 (20:28 +0000)]
net: implement lockless SO_MAX_PACING_RATE
SO_MAX_PACING_RATE setsockopt() does not need to hold the socket lock:
after adding READ_ONCE() accessors, sk->sk_pacing_rate readers can run
fine even if the value is changed by other threads.
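A sketch of the resulting pattern (simplified; the real code also handles
the ~0U "unlimited" encoding):

  /* setsockopt() writer side, no lock_sock() required: */
  WRITE_ONCE(sk->sk_max_pacing_rate, val);
  if (val < READ_ONCE(sk->sk_pacing_rate))
          WRITE_ONCE(sk->sk_pacing_rate, val);

  /* reader side, e.g. in the pacing logic: */
  rate = READ_ONCE(sk->sk_pacing_rate);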
Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Thu, 21 Sep 2023 20:28:11 +0000 (20:28 +0000)]
net: implement lockless SO_PRIORITY
This is a followup of 8bf43be799d4 ("net: annotate data-races
around sk->sk_priority").
sk->sk_priority can be read and written without holding the socket lock.
Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: Wenjia Zhang <wenjia@linux.ibm.com> Signed-off-by: David S. Miller <davem@davemloft.net>
openvswitch: reduce stack usage in do_execute_actions
The do_execute_actions() function can be called recursively multiple
times while executing actions that require pipeline forking or
recirculations. It may also be re-entered multiple times if the packet
leaves the openvswitch module and re-enters it through a different port.
Currently, there is a 256-byte array allocated on the stack in this
function to hold the NSH header. Compilers tend to pre-allocate that
space right at the beginning of the function:
a88: 48 81 ec b0 01 00 00 sub $0x1b0,%rsp
NSH is not a very common protocol, but the space is allocated on every
recursive call or re-entry, multiplying the wasted stack space.
Move the stack allocation into the push_nsh() function, which is only
used if NSH actions are actually present. push_nsh() is also a simple
function with no possibility of re-entry, so the stack space is returned
right away.
With this change the preallocated space is reduced by 256 B per call:
b18: 48 81 ec b0 00 00 00 sub $0xb0,%rsp
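A sketch of the shape of the change (simplified; the actual push logic is
elided):

  /* the 256-byte scratch buffer now lives only in push_nsh(), which
   * cannot recurse, so the stack is reclaimed as soon as the NSH
   * header has been pushed:
   */
  static int push_nsh(struct sk_buff *skb, struct sw_flow_key *key,
                      const struct nlattr *attr)
  {
          u8 buf[NSH_HDR_MAX_LEN];
          struct nshhdr *nh = (struct nshhdr *)buf;
          int err;

          err = nsh_hdr_from_nlattr(attr, nh, NSH_HDR_MAX_LEN);
          if (err)
                  return err;

          /* ... push 'nh' onto the skb ... */
          return 0;
  }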
Signed-off-by: Ilya Maximets <i.maximets@ovn.org> Reviewed-by: Eric Dumazet <edumazet@google.com> Reviewed-by: Eelco Chaudron <echaudro@redhat.com> Reviewed-by: Aaron Conole <aconole@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Thu, 21 Sep 2023 08:52:17 +0000 (08:52 +0000)]
virtio_net: avoid data-races on dev->stats fields
Use DEV_STATS_INC() and DEV_STATS_READ() which provide
atomicity on paths that can be used concurrently.
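For example:

  /* instead of dev->stats.rx_dropped++, which can lose updates
   * under concurrency:
   */
  DEV_STATS_INC(dev, rx_dropped);

  /* and when reporting: */
  stats->rx_dropped = DEV_STATS_READ(dev, rx_dropped);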
Reported-by: syzbot <syzkaller@googlegroups.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com> Cc: "Michael S. Tsirkin" <mst@redhat.com> Cc: Jason Wang <jasowang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
This patch extends flower offload support for the MPLS protocol.
Due to a hardware limitation, the driver currently supports an LSE
depth of up to 4.
Signed-off-by: Hariprasad Kelam <hkelam@marvell.com> Signed-off-by: Sunil Goutham <sgoutham@marvell.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
net: ethernet: xilinx: Drop kernel doc comment about return value
During review of the patch that became 2e0ec0afa902 ("net: ethernet:
xilinx: Convert to platform remove callback returning void") in
net-next, Radhey Shyam Pandey pointed out that the change makes the
documentation about the return value obsolete. The patch was applied
without addressing this feedback, so here comes a fix in a separate
patch.
Fixes: 2e0ec0afa902 ("net: ethernet: xilinx: Convert to platform remove callback returning void") Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> Reviewed-by: Radhey Shyam Pandey <radhey.shyam.pandey@amd.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Switch to napi_consume_skb() to take advantage of bulk freeing, and skb
reuse through the skb cache in conjunction with napi_build_skb().
When the 'budget' parameter is 0, indicating non-NAPI context,
dev_consume_skb_any() is called internally.
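In the TX completion path this is a one-line substitution:

  /* bulk free / recycle via the per-CPU NAPI skb cache; falls back
   * to dev_consume_skb_any() when budget == 0 (non-NAPI context)
   */
  napi_consume_skb(skb, budget);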
Signed-off-by: Sieng-Piaw Liew <liew.s.piaw@gmail.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 1 Oct 2023 12:20:36 +0000 (13:20 +0100)]
Merge branch 'sch_fq-improvements'
Eric Dumazet says:
====================
net_sched: sch_fq: round of improvements
For FQ tenth anniversary, it was time for making it faster.
The FQ part (as in Fair Queue) is rather expensive, because
we have to classify packets and store them in a per-flow structure,
and add this per-flow structure in a hash table. Then the RR lists
also add cache line misses.
Most fq qdiscs are almost idle. Trying to share NIC bandwidth then has
no benefit, so the qdisc could behave like a FIFO.
This series brings a 5 % throughput increase in an intensive
tcp_rr workload, and a 13 % increase for (unpaced) UDP packets.
v2: removed an extra label (build bot).
Fixed an accidental increase of the stat_internal_packets counter
in the fast path.
Added "constify qdisc_priv()" patch to allow fq_fastpath_check()'s
first parameter to be const.
Fixed a typo on 'eligible' (Willem).
====================
Reviewed-by: Jamal Hadi Salim <jhs@mojatatu.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Wed, 20 Sep 2023 20:17:15 +0000 (20:17 +0000)]
net_sched: sch_fq: always garbage collect
FQ performs garbage collection at enqueue time, and only
if the number of flows is above a given threshold, which
is hit after the qdisc has been used a bit.
Since an RB-tree traversal is needed to locate a flow,
it makes sense to perform gc all the time, to keep
the rb-trees smaller.
This reduces average storage costs in FQ by 50 %,
and avoids one cache line miss at enqueue time when the
fast path added in the prior patch cannot be used.
Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Wed, 20 Sep 2023 20:17:14 +0000 (20:17 +0000)]
net_sched: sch_fq: add fast path for mostly idle qdisc
TCQ_F_CAN_BYPASS can be used by a few qdiscs.
The idea is that if we queue a packet to an empty qdisc,
the following dequeue() would pick it immediately.
FQ cannot use the generic TCQ_F_CAN_BYPASS code,
because some additional checks need to be performed.
This patch adds a similar fast path to FQ.
Most of the time, the qdisc is not throttled,
and many packets can avoid bringing/touching
at least four cache lines, and consuming 128 bytes
of memory to store the state of a flow.
After this patch, netperf can send UDP packets about 13 % faster,
and pktgen goes 30 % faster (when FQ is in the way), on a fast NIC.
TCP traffic is also improved, thanks to a reduction of cache line misses.
I have measured a 5 % increase of throughput on a tcp_rr intensive workload.
Eric Dumazet [Wed, 20 Sep 2023 17:29:43 +0000 (17:29 +0000)]
tcp: derive delack_max from rto_min
While BPF allows setting icsk->icsk_delack_max
and/or icsk->icsk_rto_min, we have an ip route
attribute (RTAX_RTO_MIN) to be able to tune rto_min,
but nothing to consequently adjust the max delayed ack,
which varies from 40 ms to 200 ms (TCP_DELACK_{MIN|MAX}).
This makes RTAX_RTO_MIN of almost no practical use,
unless customers are in big trouble.
Modern datacenter communications want to set
rto_min to ~5 ms, and the max delayed ack one jiffy
smaller, to avoid spurious retransmits.
After this patch, an "rto_min 5" route attribute will
effectively lower max delayed ack timers to 4 ms.
Note in the following ss output, "rto:6 ... ato:4"
While we could argue this patch fixes a bug with RTAX_RTO_MIN,
I do not add a Fixes: tag, so that we can soak it a bit before
asking for backports to stable branches.
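For reference, a sketch of the derivation described above (the helper
shape is assumed from the description; with HZ=1000 and "rto_min 5" it
yields the 4 ms ato seen in the ss output):

  static u32 tcp_delack_max(struct sock *sk)
  {
          /* one jiffy less than the effective rto_min, never below 1 */
          u32 delack_from_rto_min = max_t(u32, tcp_rto_min(sk), 2) - 1;

          return min_t(u32, inet_csk(sk)->icsk_delack_max,
                       delack_from_rto_min);
  }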
Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Acked-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sat, 30 Sep 2023 17:49:14 +0000 (18:49 +0100)]
Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue
Tony Nguyen says:
====================
ice: add PTP auxiliary bus support
Michal Michalik says:
The auxiliary bus allows exchanging information between PFs, which both
helps fix problems and simplifies the implementation of new features.
The auxiliary bus is enabled for all devices supported by the ice driver.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
pktgen: Introducing 'SHARED' flag for testing with non-shared skb
Currently, skbs generated by pktgen always have their reference count
incremented before transmission, so their reference count is always
greater than 1, leading to two issues:
1. Only the code paths for shared skbs can be tested.
2. In certain situations, skbs can only be released by pktgen.
To enhance testing comprehensiveness, we are introducing the "SHARED"
flag to indicate whether an SKB is shared. This flag is enabled by
default, aligning with the current behavior. However, disabling this
flag allows skbs with a reference count of 1 to be transmitted.
So we can test non-shared skbs and code paths where skbs are released
within the stack.
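Usage follows the existing pktgen flag syntax ('!' clears a flag), e.g.:

  # default behaviour: skbs are shared (reference count > 1)
  echo "flag SHARED" > /proc/net/pktgen/eth0

  # test the non-shared path: skbs leave with a reference count of 1
  echo "flag !SHARED" > /proc/net/pktgen/eth0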
pktgen: Automate flag enumeration for unknown flag handling
When specifying an unknown flag, pktgen will print all available flags.
Currently, these flags are provided as fixed strings, which requires
manual updates when flags change. Replace this with automated flag
enumeration.
As the number of tdc tests is growing, so is our completion wall time.
One of the ideas to improve this is to run tests in parallel, as they
are self-contained.
This series allows for tests to run in parallel, in batches of 32 tests.
Not all tests can run in parallel as they might conflict with each other.
The code will still honor this requirement even when trying to run the
tests over the worker pool.
In order to make this happen we had to localize the test resources
(patches 1 and 2): instead of having all tests share one single
namespace and veth devices, each test now gets its own local namespace
and devices.
Even though the tests serialize over rtnl_lock in the kernel, we
measured a speedup of about 3x in a test VM.
====================
Pedro Tammela [Tue, 19 Sep 2023 13:54:03 +0000 (10:54 -0300)]
selftests/tc-testing: implement tdc parallel test run
Use a Python process pool to run the tests in parallel.
Not all tests can run in parallel, for instance tests that are not
namespaced and tests that use netdevsim, as they can conflict with one
another.
The code logic will split the tests into serial and parallel.
For the parallel tests, we build batches of 32 tests and queue each
batch on the process pool. For the serial tests, they are queued as a
whole into the process pool, which in turn executes them concurrently
with the parallel tests.
Even though the tests serialize on rtnl_lock in the kernel, this feature
showed results with a ~3x speedup on the wall time for the entire test suite
running in a VM:
Before - 4m32.502s
After - 1m19.202s
Examples:
In order to run tdc using 4 processes:
./tdc.py -J4 <...>
In order to run tdc using 1 process:
./tdc.py -J1 <...> || ./tdc.py <...>
Note that the kernel configuration will affect the speed of the tests,
especially if such configuration slows down process creation and/or
fork().
Tested-by: Davide Caratti <dcaratti@redhat.com> Signed-off-by: Pedro Tammela <pctammela@mojatatu.com> Acked-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Pedro Tammela [Tue, 19 Sep 2023 13:54:02 +0000 (10:54 -0300)]
selftests/tc-testing: update test definitions for local resources
With resources localized on a per test basis, some tests definitions
either contain redundant commands, were wrong or could be simplified.
Update all of them to match the new requirements.
Tested-by: Davide Caratti <dcaratti@redhat.com> Signed-off-by: Pedro Tammela <pctammela@mojatatu.com> Acked-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Pedro Tammela [Tue, 19 Sep 2023 13:54:01 +0000 (10:54 -0300)]
selftests/tc-testing: localize test resources
As of today, the current tdc architecture creates one netns and uses it
to run all tests. This assumption was embedded into the nsPlugin which
carried over as how the tests were written.
The tdc tests are by definition self contained and can,
theoretically, run in parallel. Even though in the kernel they will
serialize over the rtnl lock, we should expect a significant speedup of the
total wall time for the entire test suite, which is hitting close to
1100 tests at this point.
A first step to achieve this goal is to remove sharing of global
resources like veth/dummy interfaces and the netns. In this patch we
'localize' these resources on a per-test basis: each test gets its own
netns and veth/dummy interfaces.
The resources are spawned in the pre_suite phase, where tdc will prepare
all netns and interfaces for all tests. This is done in order to avoid
concurrency issues with netns / interfaces spawning and commands using
them. As tdc progresses, the resources are deleted after each test finishes
executing.
Tests that don't use the nsPlugin still run under the root namespace,
but are now required to manage any external resources like interfaces.
These cannot be parallelized as their definition doesn't allow it.
On the other hand, when using the nsPlugin, tests don't need to create
dummy/veth interfaces as these are handled already.
Tested-by: Davide Caratti <dcaratti@redhat.com> Signed-off-by: Pedro Tammela <pctammela@mojatatu.com> Acked-by: Jamal Hadi Salim <jhs@mojatatu.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
David S. Miller [Fri, 22 Sep 2023 07:26:30 +0000 (08:26 +0100)]
Merge branch 'mlxsw-multicast'
Petr Machata says:
====================
mlxsw: Improve blocks selection for IPv6 multicast forwarding
Amit Cohen writes:
The driver configures two ACL regions during initialization, these regions
are used for IPv4 and IPv6 multicast forwarding. Entries residing in these
two regions match on the {SIP, DIP, VRID} key elements.
Currently for the IPv6 region, 9 key blocks are used. This can be
improved by reducing the number of key blocks needed for the IPv6 region
to 8. It is possible to use key blocks that mix subsets of the VRID
element with subsets of the DIP element.
To make this happen, we have to take into account the algorithm that
chooses which key blocks will be used. It is lazy and not optimal, as
this is a complex task. It searches for the block that contains the most
required elements, chooses it, removes the elements that appear in the
chosen block, and then searches again for the block that contains the
most remaining elements.
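In rough pseudo-C, the selection loop described above is a greedy set
cover (the struct and element-bitmap shapes below are assumptions, not
the driver's exact code):

  static void pick_blocks(unsigned long *required, int nelems,
                          const struct key_block *blocks, int nblocks,
                          bool *chosen)
  {
          while (!bitmap_empty(required, nelems)) {
                  int best = -1, best_cnt = 0, i, cnt;

                  /* pick the block covering the most missing elements */
                  for (i = 0; i < nblocks; i++) {
                          if (chosen[i])
                                  continue;
                          cnt = bitmap_weight_and(required,
                                                  blocks[i].elements,
                                                  nelems);
                          if (cnt > best_cnt) {
                                  best = i;
                                  best_cnt = cnt;
                          }
                  }
                  if (best < 0)
                          break;

                  chosen[best] = true;
                  bitmap_andnot(required, required, blocks[best].elements,
                                nelems);
          }
  }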
To optimize the number of blocks for IPv6 multicast forwarding, handle
the following:
1. Add support for key blocks that mix subsets of the VRID element with
subsets of the DIP element.
2. Prevent the algorithm from choosing other blocks for VRID.
Currently, we have the block 'ipv4_4' which contains 2 sub-elements of
VRID. With the existing algorithm, this block might be chosen; then 8
blocks must be chosen for SIP and DIP and we get 9 blocks to match on
{SIP, DIP, VRID}. Therefore, replace this block with a new block 'ipv4_5'
that contains 1 element for VRID. It will not be chosen for IPv6, as the
VRID element will be broken into several sub-elements. In this way we can
get 8 blocks for IPv6 multicast forwarding.
This improvement was tested and indeed 8 blocks are used instead of 9.
v2:
- Resending without changes.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Amit Cohen [Tue, 19 Sep 2023 15:42:56 +0000 (17:42 +0200)]
mlxsw: Edit IPv6 key blocks to use one less block for multicast forwarding
Two ACL regions that are configured by the driver during initialization are
the ones used for IPv4 and IPv6 multicast forwarding. Entries residing
in these two regions match on the {SIP, DIP, VRID} key elements.
Currently for IPv6 region, 9 key blocks are used:
* 4 for SIP - 'ipv4_1', 'ipv6_{3,4,5}'
* 4 for DIP - 'ipv4_0', 'ipv6_{0,1,2/2b}'
* 1 for VRID - 'ipv4_4b'
This can be improved by reducing the number of key blocks needed for
the IPv6 region to 8. It is possible to use key blocks that mix subsets of
the VRID element with subsets of the DIP element.
The following key blocks can be used:
* 4 for SIP - 'ipv4_1', 'ipv6_{3,4,5}'
* 1 for subset of DIP - 'ipv4_0'
* 3 for the rest of DIP and subsets of VRID - 'ipv6_{0,1,2/2b}'
To make this happen, add VRID sub-elements as part of existing keys -
'ipv6_{0,1,2/2b}'. Note that one of the sub-elements is called
VIRT_ROUTER_MSB and does not contain bit numbers like the rest, as for
Spectrum < 4 this element represents bits 8-10 and for Spectrum-4 it
represents bits 8-11.
Breaking VRID into 3 sub-elements makes the driver use one less block in
IPv6 region for multicast forwarding. The sub-elements can be filled in
blocks that are used for destination IP.
The algorithm in the driver that chooses which key blocks will be used
is lazy and not optimal. It searches for the block that contains the
most required elements, chooses it, removes the elements that appear in
the chosen block, and then searches again for the block that contains
the most remaining elements.
When key block 'ipv4_4' is defined, the algorithm might choose it, as it
contains 2 sub-elements of VRID; then 8 blocks must be chosen for SIP and
DIP and we get 9 blocks to match on {SIP, DIP, VRID}. That is why we had
to remove key block 'ipv4_4' in a previous patch and use a key block that
contains one field for VRID.
This improvement was tested and indeed 8 blocks are used instead of 9.
Signed-off-by: Amit Cohen <amcohen@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
The previous patch replaced the key block 'ipv4_4' with 'ipv4_5'. The
corresponding block for Spectrum-4 is 'ipv4_4b'. To be consistent, replace
key block 'ipv4_4b' with 'ipv4_5b'.
Signed-off-by: Amit Cohen <amcohen@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Amit Cohen [Tue, 19 Sep 2023 15:42:54 +0000 (17:42 +0200)]
mlxsw: Add 'ipv4_5' flex key
Currently, the virtual router ID element is broken into two
sub-elements, 'VIRT_ROUTER_LSB' and 'VIRT_ROUTER_MSB'. It was split
because this field is split in the 'ipv4_4' flex key, which is used for
IPv4 in Spectrum < 4.
For Spectrum-4, we use the 'ipv4_4b' flex key, which contains one field
for the virtual router; this key is not supported in older ASICs.
Add an 'ipv4_5' flex key which is supported in all ASICs and contains
one field for the virtual router. Then there is no reason to use
'VIRT_ROUTER_LSB' and 'VIRT_ROUTER_MSB'; remove them and add one
element, 'VIRT_ROUTER', for this field.
The motivation is to get rid of the 'ipv4_4' flex key, as it might be
chosen for the IPv6 multicast forwarding region, which would prevent the
improvement in a following patch. See more details in the cover letter
and in a following patch.
Signed-off-by: Amit Cohen <amcohen@nvidia.com> Reviewed-by: Ido Schimmel <idosch@nvidia.com> Signed-off-by: Petr Machata <petrm@nvidia.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Peter Lafreniere [Tue, 19 Sep 2023 14:14:23 +0000 (14:14 +0000)]
hamradio: baycom: remove useless link in Kconfig
The Kconfig help text for baycom drivers suggests that more information
on the hardware can be found at <https://www.baycom.de>. The website now
includes no information on their ham radio products other than a mention
that they were once produced by the company, saying:
"The amateur radio equipment is now no longer part and business of BayCom GmbH"
As there is no information relevant to the baycom driver on the site,
remove the link.
Signed-off-by: Peter Lafreniere <peter@n8pjl.ca> Signed-off-by: David S. Miller <davem@davemloft.net>
- netfilter:
- fix several GC related issues
- fix race between IPSET_CMD_CREATE and IPSET_CMD_SWAP
- eth: team: fix null-ptr-deref when team device type is changed
- eth: i40e: fix VF VLAN offloading when port VLAN is configured
- eth: ionic: fix 16bit math issue when PAGE_SIZE >= 64KB
Previous releases - always broken:
- core: fix ETH_P_1588 flow dissector
- mptcp: fix several connection hang-up conditions
- bpf:
- avoid deadlock when using queue and stack maps from NMI
- add override check to kprobe multi link attach
- hsr: properly parse HSRv1 supervisor frames
- eth: igc: fix infinite initialization loop with early XDP redirect
- eth: octeon_ep: fix tx dma unmap len values in SG
- eth: hns3: fix GRE checksum offload issue"
* tag 'net-6.6-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (87 commits)
sfc: handle error pointers returned by rhashtable_lookup_get_insert_fast()
igc: Expose tx-usecs coalesce setting to user
octeontx2-pf: Do xdp_do_flush() after redirects.
bnxt_en: Flush XDP for bnxt_poll_nitroa0()'s NAPI
net: ena: Flush XDP packets on error.
net/handshake: Fix memory leak in __sock_create() and sock_alloc_file()
net: hinic: Fix warning-hinic_set_vlan_fliter() warn: variable dereferenced before check 'hwdev'
netfilter: ipset: Fix race between IPSET_CMD_CREATE and IPSET_CMD_SWAP
netfilter: nf_tables: fix memleak when more than 255 elements expired
netfilter: nf_tables: disable toggling dormant table state more than once
vxlan: Add missing entries to vxlan_get_size()
net: rds: Fix possible NULL-pointer dereference
team: fix null-ptr-deref when team device type is changed
net: bridge: use DEV_STATS_INC()
net: hns3: add 5ms delay before clear firmware reset irq source
net: hns3: fix fail to delete tc flower rules during reset issue
net: hns3: only enable unicast promisc when mac table full
net: hns3: fix GRE checksum offload issue
net: hns3: add cmdq check for vf periodic service task
net: stmmac: fix incorrect rxq|txq_stats reference
...
Merge tag 'v6.6-rc3.vfs.ctime.revert' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs
Pull fine-grained timestamp reverts from Christian Brauner:
"Earlier this week we sent a few minor fixes for the multi-grained
timestamp work in [1]. While we were polishing those up after Linus
realized that there might be a nicer way to fix them, we received a
regression report in [2] that fine-grained timestamps break gnulib
tests and thus possibly other tools.
The kernel will elide fine-grain timestamp updates when no one is
actively querying for them to avoid performance impacts. So a sequence
like write(f1) stat(f2) write(f2) stat(f2) write(f1) stat(f1) may
result in timestamp f1 to be older than the final f2 timestamp even
though f1 was last written too but the second write didn't update the
timestamp.
Such plotholes can lead to subtle bugs when programs compare
timestamps. For example, the nap() function in [2] will estimate that
it needs to wait one ns on a fine-grain-timestamp-enabled filesystem
between subsequent calls to observe a timestamp change. But in general
we don't update timestamps with more than one jiffy if we think that
no one is actively querying for fine-grain timestamps to avoid
performance impacts.
While discussing various fixes the decision was to go back to the
drawing board and ultimately to explore a solution that involves only
exposing such fine-grained timestamps to nfs internally and never to
userspace.
As there are multiple solutions being discussed, the honest thing to do
here is not to fix this up or disable it, but to cleanly revert. The
general infrastructure will probably come back, but there is no reason
to keep this code in mainline.
The general changes to timestamp handling are valid and a good cleanup
that will stay. The revert is fully bisectable"
Merge tag 'powerpc-6.6-2' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux
Pull powerpc fixes from Michael Ellerman:
- A fix for breakpoint handling which was using get_user() while atomic
- Fix the Power10 HASHCHK handler which was using get_user() while
atomic
- A few build fixes for issues caused by recent changes
Thanks to Benjamin Gray, Christophe Leroy, Kajol Jain, and Naveen N Rao.
* tag 'powerpc-6.6-2' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
powerpc/dexcr: Move HASHCHK trap handler
powerpc/82xx: Select FSL_SOC
powerpc: Fix build issue with LD_DEAD_CODE_DATA_ELIMINATION and FTRACE_MCOUNT_USE_PATCHABLE_FUNCTION_ENTRY
powerpc/watchpoints: Annotate atomic context in more places
powerpc/watchpoint: Disable pagefaults when getting user instruction
powerpc/watchpoints: Disable preemption in thread_change_pc()
powerpc/perf/hv-24x7: Update domain value check