Jakub Kicinski [Wed, 26 Jun 2024 01:24:55 +0000 (18:24 -0700)]
selftests: drv-net: add ability to wait for at least N packets to load gen
Teach the load generator how to wait for at least a given number
of packets to be received. This will be useful for filtering,
where we'll want to send a non-trivial number of packets and
make sure they landed in the right queues.
Reviewed-by: Breno Leitao <leitao@debian.org> Reviewed-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Link: https://patch.msgid.link/20240626012456.2326192-4-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Wed, 26 Jun 2024 01:24:53 +0000 (18:24 -0700)]
selftests: drv-net: try to check if port is in use
We use random ports for communication. As Willem predicted,
this leads to occasional failures. Try to check whether the port is
already in use by opening a socket and binding to that port.
Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Reviewed-by: Petr Machata <petrm@nvidia.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Link: https://patch.msgid.link/20240626012456.2326192-2-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
====================
mlxsw: Reduce memory footprint of mlxsw driver
Amit Cohen writes:
A previous patch set used page pool to allocate buffers; to simplify the
change, we first used one continuous buffer, which was allocated with
order > 0. This set improves page pool usage to allocate the exact number
of pages which are required for a packet.
This change requires using fragmented SKBs; till now all the buffer was in
the linear part. Note that 'skb->truesize' is decreased for small packets.
This set significantly reduces the memory consumption of the mlxsw driver.
The footprint is reduced by 26%.
Patch set overview:
Patch #1 calculates the number of scatter/gather entries and stores the value.
Patch #2 converts the driver to use fragmented buffers.
====================
Amit Cohen [Tue, 25 Jun 2024 13:47:35 +0000 (15:47 +0200)]
mlxsw: pci: Use fragmented buffers
A WQE (Work Queue Element) includes 3 scatter/gather entries for buffers.
The buffer can be split into 3 parts; software should set the address and
byte count of each part.
A previous patch set used page pool to allocate buffers; to simplify the
change, we first used one continuous buffer, which was allocated with
order > 0. This patch improves page pool usage to allocate the exact
number of pages which are required for a packet.
As part of init, fill WQE.address[x] and WQE.byte_count* with pages which
are allocated from the pool. Fill x entries according to the number of
scatter/gather entries which are required for the maximum packet size.
When a packet is received, check the actual size and replace only the used
pages. Reserve bytes for software overhead only as part of the first entry.
This change also requires using fragmented SKBs; till now all the buffer
was in the linear part. Note that 'skb->truesize' is decreased for small
packets.
For now the maximum buffer size is 3 * PAGE_SIZE, which is enough; in case
the driver ever supports a larger MTU, we can use 'order' to allocate more
than one page per scatter/gather entry.
This change significantly reduces the memory consumption of the mlxsw
driver. The footprint is reduced by 26%.
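As a rough illustration of the Rx WQE fill described above, here is a hedged sketch (not the actual mlxsw code; the mlxsw_pci_wqe_*_set() helpers, the rx page array and the software-overhead constant are assumptions):

    #include <linux/mm.h>
    #include <net/page_pool/helpers.h>

    #define MLXSW_PCI_WQE_SG_ENTRIES   3
    #define MLXSW_PCI_SW_OVERHEAD      64   /* headroom reserved in the first entry (assumed value) */

    /* Fill the scatter/gather entries of one Rx WQE with pages from the pool. */
    static int mlxsw_pci_rx_wqe_fill(struct page_pool *pool, char *wqe,
                                     struct page *pages[], u8 num_sg_entries)
    {
            dma_addr_t dma;
            u8 i;

            for (i = 0; i < num_sg_entries; i++) {
                    pages[i] = page_pool_dev_alloc_pages(pool);
                    if (!pages[i])
                            goto err_rollback;

                    dma = page_pool_get_dma_addr(pages[i]);
                    /* Only the first entry loses room to the software overhead. */
                    mlxsw_pci_wqe_address_set(wqe, i, dma + (i ? 0 : MLXSW_PCI_SW_OVERHEAD));
                    mlxsw_pci_wqe_byte_count_set(wqe, i, PAGE_SIZE - (i ? 0 : MLXSW_PCI_SW_OVERHEAD));
            }
            return 0;

    err_rollback:
            while (i--)
                    page_pool_put_full_page(pool, pages[i], false);
            return -ENOMEM;
    }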
Amit Cohen [Tue, 25 Jun 2024 13:47:34 +0000 (15:47 +0200)]
mlxsw: pci: Store number of scatter/gather entries for maximum packet size
A previous patch set used page pool for Rx buffer allocations. To
simplify the change, we first used page pool for one allocation per
packet - one continuous buffer is allocated for each packet. This can be
improved by using fragmented buffers; then memory consumption will be
significantly reduced.
A WQE (Work Queue Element) includes up to 3 scatter/gather entries for
data. As preparation for fragmented buffer usage, calculate the number of
scatter/gather entries which are required for a packet according to the
maximum MTU and store it for future use. For now use PAGE_SIZE for each
entry, which means that the maximum buffer size is 3 * PAGE_SIZE. This is
enough for the maximum MTU which is supported in the driver now (10K).
Warn in the unlikely case of a maximum MTU which requires more than 3 pages;
for now this warning should not happen with a standard page size (>=4K) and
the maximum MTU (10K).
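As a hedged sketch of that calculation (illustrative only; the constant and function names are assumptions, not necessarily the driver's):

    #include <linux/kernel.h>
    #include <linux/mm.h>

    #define MLXSW_PCI_WQE_SG_ENTRIES   3   /* a WQE carries up to 3 scatter/gather entries */

    /* Number of PAGE_SIZE scatter/gather entries needed for the maximum packet size. */
    static u8 mlxsw_pci_num_sg_entries_get(u16 max_byte_count)
    {
            u8 num_sg_entries = DIV_ROUND_UP(max_byte_count, PAGE_SIZE);

            /* A 10K maximum MTU fits in 3 pages as long as PAGE_SIZE >= 4K. */
            WARN_ON_ONCE(num_sg_entries > MLXSW_PCI_WQE_SG_ENTRIES);
            return num_sg_entries;
    }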
Uwe Kleine-König [Tue, 25 Jun 2024 08:38:53 +0000 (10:38 +0200)]
net: Drop explicit initialization of struct i2c_device_id::driver_data to 0
These drivers don't use the driver_data member of struct i2c_device_id,
so don't explicitly initialize this member.
This prepares putting driver_data in an anonymous union which requires
either no initialization or named designators. But it's also a nice
cleanup on its own.
While at it, also remove commas after the sentinel entries.
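For illustration, a hypothetical driver's ID table before and after this kind of cleanup (the "foo" device name is made up):

    #include <linux/i2c.h>
    #include <linux/module.h>

    /*
     * Before the cleanup:
     *
     *      static const struct i2c_device_id foo_i2c_ids[] = {
     *              { "foo", 0 },
     *              { },
     *      };
     */

    /* After: driver_data is left uninitialized and the sentinel has no trailing comma. */
    static const struct i2c_device_id foo_i2c_ids[] = {
            { "foo" },
            { }
    };
    MODULE_DEVICE_TABLE(i2c, foo_i2c_ids);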
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@baylibre.com> Reviewed-by: Petr Machata <petrm@nvidia.com> # For mlxsw Reviewed-by: Kory Maincent <Kory.maincent@bootlin.com> Reviewed-by: Jeremy Kerr <jk@codeconstruct.com.au> # for mctp-i2c Reviewed-by: Oleksij Rempel <o.rempel@pengutronix.de> Acked-by: Oleksij Rempel <o.rempel@pengutronix.de> Link: https://patch.msgid.link/20240625083853.2205977-2-u.kleine-koenig@baylibre.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
====================
add ethernet driver for Tehuti Networks TN40xx chips
This patchset adds a new 10G ethernet driver for Tehuti Networks
TN40xx chips. Note in mainline, there is a driver for Tehuti Networks
(drivers/net/ethernet/tehuti/tehuti.[hc]), which supports TN30xx
chips.
Multiple vendors (DLink, Asus, Edimax, QNAP, etc) developed adapters
based on TN40xx chips. Tehuti Networks went out of business but the
drivers are still distributed under GPL2 with some of the hardware
(and also available on some sites). With some changes, I'm trying to
upstream this driver together with a new PHY driver written in Rust.
The major change is replacing the PHY abstraction layer in the original
driver with phylink. TN40xx chips are used with various PHY hardware
(AMCC QT2025, TI TLK10232, Aqrate AQR105, and Marvell MV88X3120,
MV88X3310, and MV88E2010).
I've also been working on a new PHY driver for QT2025 in Rust [1]. For
now, I enable only adapters using QT2025 PHY in the PCI ID table of
this driver. I've tested this driver and the QT2025 PHY driver with
Edimax EN-9320 10G adapter and 10G-SR SFP+. In mainline, there are PHY
drivers for AQR105 and Marvell PHYs, which could work for some TN40xx
adapters with this driver.
To make reviewing easier, this patchset has only basic functions. Once
merged, I'll submit features like ethtool support.
FUJITA Tomonori [Sun, 23 Jun 2024 23:55:07 +0000 (08:55 +0900)]
net: tn40xx: add phylink support
This patch adds support for multiple PHY hardware with phylink. The
adapters with TN40xx chips use multiple PHY hardware; AMCC QT2025, TI
TLK10232, Aqrate AQR105, and Marvell 88X3120, 88X3310, and MV88E2010.
For now, the PCI ID table of this driver enables adapters using only
QT2025 PHY. I've tested this driver and the QT2025 PHY driver (SFP+
10G SR) with Edimax EN-9320 10G adapter.
Signed-off-by: FUJITA Tomonori <fujita.tomonori@gmail.com> Reviewed-by: Hans-Frieder Vogt <hfdevel@gmx.net> Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk> Link: https://patch.msgid.link/20240623235507.108147-8-fujita.tomonori@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
FUJITA Tomonori [Sun, 23 Jun 2024 23:55:05 +0000 (08:55 +0900)]
net: tn40xx: add basic Rx handling
This patch adds basic Rx handling. The Rx logic uses three major data
structures: two ring buffers shared with the NIC and one database. One
ring buffer is used to send the NIC information about the memory in which
received packets should be stored. The other is used to get information
from the NIC about received packets. The database is used to keep the
information about the DMA mapping. After a packet arrives, the db is used
to pass the packet to the network stack.
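A condensed, purely illustrative sketch of those three structures (the struct and field names below are assumptions, not the driver's actual definitions):

    #include <linux/dma-mapping.h>
    #include <linux/skbuff.h>

    /* Descriptor ring shared with the NIC: one instance tells the NIC where to
     * store incoming packets (the "fill" ring), another is written by the NIC
     * to report received packets.
     */
    struct tn40_rxd_fifo {
            void            *descs;         /* ring memory visible to the NIC */
            dma_addr_t       da;            /* DMA address of the ring */
            unsigned int     size;
            unsigned int     wptr, rptr;    /* producer/consumer positions */
    };

    /* Per-buffer database entry: remembers the DMA mapping so the buffer can be
     * unmapped and the packet passed to the network stack on completion.
     */
    struct tn40_rx_map {
            struct sk_buff  *skb;
            dma_addr_t       dma;
            unsigned int     size;
    };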
FUJITA Tomonori [Sun, 23 Jun 2024 23:55:04 +0000 (08:55 +0900)]
net: tn40xx: add basic Tx handling
This patch adds device specific structures to initialize the hardware
with basic Tx handling. The original driver loads the firmware embedded
in a header file; this driver is implemented to use the firmware API
instead.
The Tx logic uses three major data structures: two ring buffers shared
with the NIC and one database. One ring buffer is used to send the NIC
information about packets to be sent. The other is used to get
information from the NIC about packets that have been sent. The database
is used to keep the information about the DMA mapping. After a packet is
sent, the db is used to free the resources used for the packet.
FUJITA Tomonori [Sun, 23 Jun 2024 23:55:01 +0000 (08:55 +0900)]
PCI: Add Edimax Vendor ID to pci_ids.h
Add the Edimax Vendor ID (0x1432) for an ethernet driver for Tehuti
Networks TN40xx chips. This ID can be used for Realtek 8180 and Ralink
rt28xx wireless drivers.
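The corresponding pci_ids.h addition is presumably a one-liner along these lines (the value comes from the commit message; the exact placement in the file is not shown):

    #define PCI_VENDOR_ID_EDIMAX            0x1432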
Jakub Kicinski [Wed, 26 Jun 2024 00:48:35 +0000 (17:48 -0700)]
Merge branch 'gve-add-flow-steering-support'
Ziwei Xiao says:
====================
gve: Add flow steering support
To support flow steering in the GVE driver, there are two adminq changes
that need to be made in advance.
The first one is adding an adminq mutex lock, which allows the
incoming flow steering operations to temporarily drop the
rtnl_lock to reduce the latency of registering flow rules among
several NICs at the same time. This could be achieved by future
changes that reduce the drivers' dependencies on the rtnl lock for
particular ethtool ops.
The second one is to add the extended adminq command so that we can
support larger adminq commands such as the configure_flow_rule command. In
that patch, there is a newly added function called
gve_adminq_execute_extended_cmd with the __maybe_unused attribute.
That attribute will be removed in the third patch of this series, which
uses the previously unused function.
The other three patches add the actual flow steering feature support in
the driver.
====================
Jeroen de Borst [Tue, 25 Jun 2024 00:12:31 +0000 (00:12 +0000)]
gve: Add flow steering ethtool support
Implement the ethtool commands that can be used to configure and query
flow-steering rules.
A large part of this change consists of translating the ethtool
representation of 'ntuples' to our internal gve_flow_rule and vice versa
in the newly created gve_flow_rule.c.
Considering the possibly large number of flow rules, the driver doesn't
store all the rules locally. When the user runs 'ethtool -n <nic>' to
check the registered rules, the driver will send adminq commands to
query a limited number of rules/rule IDs (which fit in a 4096-byte DMA
buffer) at a time as a cache for the ethtool queries. The adminq query
commands will be repeated several times until ethtool has queried all
the needed rules.
Signed-off-by: Jeroen de Borst <jeroendb@google.com> Co-developed-by: Ziwei Xiao <ziweixiao@google.com> Signed-off-by: Ziwei Xiao <ziweixiao@google.com> Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com> Reviewed-by: Harshitha Ramamurthy <hramamurthy@google.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Link: https://patch.msgid.link/20240625001232.1476315-6-ziweixiao@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jeroen de Borst [Tue, 25 Jun 2024 00:12:30 +0000 (00:12 +0000)]
gve: Add flow steering adminq commands
Add new adminq commands for the driver to configure and query flow rules
that are stored in the device. Flow steering rules are assigned with a
location that determines the relative order of the rules.
The number of flow rules can reach the order of millions. In such cases,
storing a full copy of the rules in the driver to prepare for the ethtool
query is infeasible, and querying them from the device is better. That
also needs to be optimized so that we don't send a lot of adminq commands.
The solution here is to store a limited number of rules/rule IDs in a
cache in the driver. Use a dma_pool to allocate 4k bytes, which lets the
device write at most 46 flow rules (4096/88) or 1024 rule IDs (4096/4) at
a time.
For configuring flow rules, there are 3 sub-commands:
- ADD which adds a rule at the location supplied
- DEL which deletes the rule at the location supplied
- RESET which clears all currently active rules in the device
For querying flow rules, there are also 3 sub-commands:
- QUERY_RULES corresponds to ETHTOOL_GRXCLSRULE. It fills the rules in
the allocated cache after querying the device
- QUERY_RULES_IDS corresponds to ETHTOOL_GRXCLSRLALL. It fills the
rule_ids in the allocated cache after querying the device
- QUERY_RULES_STATS corresponds to ETHTOOL_GRXCLSRLCNT. It queries the
device's current flow rule number and the supported max flow rule
limit
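A loose sketch of the rule cache allocation described above, assuming hypothetical structure and function names (only the dma_pool usage and the 4096-byte sizing come from the text):

    #include <linux/dmapool.h>
    #include <linux/gfp.h>

    #define GVE_ADMINQ_BUFFER_SIZE  4096
    #define GVE_FLOW_RULE_SIZE      88      /* per-rule size written by the device */

    struct gve_flow_rules_cache {
            struct dma_pool *pool;
            void            *rules_buf;     /* holds up to 4096 / 88 = 46 rules */
            dma_addr_t       rules_buf_dma;
    };

    static int gve_flow_rules_cache_alloc(struct device *dev,
                                          struct gve_flow_rules_cache *cache)
    {
            cache->pool = dma_pool_create("gve_flow_rules", dev,
                                          GVE_ADMINQ_BUFFER_SIZE,
                                          GVE_ADMINQ_BUFFER_SIZE, 0);
            if (!cache->pool)
                    return -ENOMEM;

            cache->rules_buf = dma_pool_zalloc(cache->pool, GFP_KERNEL,
                                               &cache->rules_buf_dma);
            if (!cache->rules_buf) {
                    dma_pool_destroy(cache->pool);
                    cache->pool = NULL;
                    return -ENOMEM;
            }
            return 0;
    }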
Signed-off-by: Jeroen de Borst <jeroendb@google.com> Co-developed-by: Ziwei Xiao <ziweixiao@google.com> Signed-off-by: Ziwei Xiao <ziweixiao@google.com> Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com> Reviewed-by: Harshitha Ramamurthy <hramamurthy@google.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Link: https://patch.msgid.link/20240625001232.1476315-5-ziweixiao@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jeroen de Borst [Tue, 25 Jun 2024 00:12:29 +0000 (00:12 +0000)]
gve: Add flow steering device option
Add a new device option to signal to the driver that the device supports
flow steering. This device option also carries the maximum number of
flow steering rules that the device can store.
Signed-off-by: Jeroen de Borst <jeroendb@google.com> Co-developed-by: Ziwei Xiao <ziweixiao@google.com> Signed-off-by: Ziwei Xiao <ziweixiao@google.com> Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com> Reviewed-by: Harshitha Ramamurthy <hramamurthy@google.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Link: https://patch.msgid.link/20240625001232.1476315-4-ziweixiao@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jeroen de Borst [Tue, 25 Jun 2024 00:12:28 +0000 (00:12 +0000)]
gve: Add adminq extended command
The adminq command is limited to 64 bytes per entry, and at most 56 bytes
of that is for the command itself. To support larger commands, we need
to dma_alloc a separate memory region, put the command in that memory,
and send the DMA memory address instead of the actual command.
Introduce an extended adminq command to wrap the real command, with the
inner opcode and the allocated DMA memory address specified. Once the
device receives it, it can get the real command from the given DMA
memory address. As designed with the device, all the extended commands
will use inner opcodes larger than 0xFF.
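Conceptually, the wrapper might look something like the struct below (field names and widths are assumptions; only the inner opcode plus DMA address idea comes from the text):

    #include <linux/types.h>

    /* Outer adminq entry payload that points at a larger, DMA-allocated command. */
    struct gve_adminq_extended_command {
            __be32 inner_opcode;            /* opcode of the wrapped command, > 0xFF by design */
            __be32 inner_length;            /* size of the wrapped command in bytes */
            __be64 inner_command_addr;      /* DMA address of the dma_alloc'ed buffer */
    };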
Signed-off-by: Jeroen de Borst <jeroendb@google.com> Co-developed-by: Ziwei Xiao <ziweixiao@google.com> Signed-off-by: Ziwei Xiao <ziweixiao@google.com> Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com> Reviewed-by: Harshitha Ramamurthy <hramamurthy@google.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Link: https://patch.msgid.link/20240625001232.1476315-3-ziweixiao@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Ziwei Xiao [Tue, 25 Jun 2024 00:12:27 +0000 (00:12 +0000)]
gve: Add adminq mutex lock
We were depending on the rtnl_lock to make sure there is only one adminq
command running at a time. But some commands may hold the rtnl_lock for
too long, such as the upcoming flow steering operations. For such
situations, we can temporarily drop the rtnl_lock and replace it for
these operations with a new adminq lock, which ensures that adminq
command execution is thread-safe.
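Conceptually, the resulting pattern is something like the sketch below (the function, field and helper names are illustrative, not necessarily what the driver uses):

    #include <linux/mutex.h>
    #include <linux/rtnetlink.h>

    /* Long-running flow-rule registration: serialize on the adminq mutex and
     * release rtnl_lock while the device processes the command.
     */
    static int gve_flow_rule_cmd_locked(struct gve_priv *priv, void *cmd)
    {
            int err;

            rtnl_unlock();                  /* don't stall unrelated ethtool ops */
            mutex_lock(&priv->adminq_lock); /* one adminq command at a time */
            err = gve_adminq_execute_cmd(priv, cmd);
            mutex_unlock(&priv->adminq_lock);
            rtnl_lock();

            return err;
    }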
Signed-off-by: Ziwei Xiao <ziweixiao@google.com> Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com> Reviewed-by: Harshitha Ramamurthy <hramamurthy@google.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Link: https://patch.msgid.link/20240625001232.1476315-2-ziweixiao@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
====================
ethtool: provide the dim profile fine-tuning channel
The NetDIM library provides excellent acceleration for many modern
network cards. However, the default DIM profiles limit its maximum
capabilities for different NICs, so providing a way for the NIC to
be custom-configured is necessary.
Currently, the way is based on the commonly used "ethtool -C".
For example,
on the server side, the virtio-net NIC with rx dim enabled has 8
queues and runs nginx.
The client uses the following command to send traffic to the server:
./wrk http://server_ip:80 -c 64 -t 5 -d 30
Then adjust the default rx-profile for server dim to
Heng Qi [Fri, 21 Jun 2024 10:13:53 +0000 (18:13 +0800)]
virtio-net: support dim profile fine-tuning
Virtio-net has different types of back-end device implementations.
In order to effectively optimize the dim library's gains for different
device implementations, let's use the new interface params to
initialize and query dim results from a customized profile list.
Heng Qi [Fri, 21 Jun 2024 10:13:51 +0000 (18:13 +0800)]
ethtool: provide customized dim profile management
The NetDIM library, currently leveraged by an array of NICs, delivers
excellent acceleration benefits. Nevertheless, NICs vary significantly
in their dim profile list prerequisites.
Specifically, virtio-net backends may present diverse sw or hw device
implementations, making a one-size-fits-all parameter list impractical.
On Alibaba Cloud, the virtio DPU's performance under the default DIM
profile falls short of expectations, partly due to a mismatch in
parameter configuration.
I also noticed that ice/idpf/ena and other NICs have customized
profile lists or place some restrictions on dim capabilities.
Motivated by this, I tried adding new params for "ethtool -C" that provide
per-device control to modify and access a device's interrupt parameters.
Usage
========
The target NIC is named ethx.
Assume that ethx only declares support for rx profile setting
(with DIM_PROFILE_RX flag set in profile_flags) and supports modification
of usec and pkt fields.
1. Query the currently customized list of the device
Heng Qi [Fri, 21 Jun 2024 10:13:50 +0000 (18:13 +0800)]
dim: make DIMLIB dependent on NET
DIMLIB's capabilities are supplied by the dim, net_dim, and
rdma_dim objects, and dim's interfaces solely act as a base for
net_dim and rdma_dim and are not explicitly used anywhere else.
rdma_dim is utilized by the infiniband driver, while net_dim
is for network devices, excluding the soc/fsl driver.
In this patch, net_dim relies on some of NET's interfaces, thus
DIMLIB needs to explicitly depend on NET in Kconfig.
The soc/fsl driver uses the functions provided by net_dim, so
it also needs to depend on NET.
Jakub Kicinski [Wed, 26 Jun 2024 00:07:06 +0000 (17:07 -0700)]
Merge branch 'ravb-add-mii-support-for-r-car-v4m'
Geert Uytterhoeven says:
====================
ravb: Add MII support for R-Car V4M
All EtherAVB instances on R-Car Gen3/Gen4 SoCs support the RGMII
interface. In addition, the first two EtherAVB instances on R-Car V4M
also support the MII interface, but this is not yet supported by the
driver. This patch series adds support for MII on R-Car Gen4, after the
customary cleanup.
The corresponding pin control support is available in [1].
Compile-tested only, as all AVB interfaces on the Gray Hawk Single
development board are connected to RGMII PHYs.
No regressions on R-Car V4H.
Geert Uytterhoeven [Mon, 24 Jun 2024 13:25:25 +0000 (15:25 +0200)]
ravb: Add MII support for R-Car V4M
All EtherAVB instances on R-Car Gen3/Gen4 SoCs support the RGMII
interface. In addition, the first two EtherAVB instances on R-Car V4M
also support the MII interface, but this is not yet supported by the
driver.
Add support for MII on R-Car Gen4 by adding an R-Car Gen4-specific EMAC
initialization function that selects the MII clock instead of the RGMII
clock when the PHY interface is MII. Note that all implementations of
EtherAVB on R-Car Gen4 SoCs have the APSR register, but only MII-capable
instances are documented to have the MIISELECT bit, which has a
documented value of zero when reserved.
Li RongQing [Fri, 21 Jun 2024 09:45:52 +0000 (17:45 +0800)]
virtio_net: Remove u64_stats_update_begin()/end() for stats fetch
This place is fetching the stats, so u64_stats_update_begin()/end()
should not be used. And since the fetcher of the stats is in the same
context as the updater of the stats, no protection is needed.
James Chapman [Mon, 24 Jun 2024 08:29:45 +0000 (09:29 +0100)]
l2tp: remove incorrect __rcu attribute
This fixes a sparse warning.
Fixes: d18d3f0a24fc ("l2tp: replace hlist with simple list for per-tunnel session list") Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202406220754.evK8Hrjw-lkp@intel.com/ Signed-off-by: James Chapman <jchapman@katalix.com> Link: https://patch.msgid.link/20240624082945.1925009-1-jchapman@katalix.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Elad Yifee [Sun, 23 Jun 2024 17:51:09 +0000 (20:51 +0300)]
net: ethernet: mtk_eth_soc: ppe: prevent ppe update for non-mtk devices
Introduce an additional validation to ensure that the PPE index
is modified exclusively for mtk_eth ingress devices.
This primarily addresses the issue related
to WED operation with multiple PPEs.
Paolo Abeni [Tue, 25 Jun 2024 09:53:09 +0000 (11:53 +0200)]
Merge branch 'net-macb-wol-enhancements'
Vineeth Karumanchi says:
====================
net: macb: WOL enhancements
- Add provisioning for queue tie-off and queue disable during suspend.
- Add support for ARP packet types to WoL.
- Advertise WoL attributes by default.
- Extend MACB supported WoL modes to the PHY supported WoL modes.
- Deprecate magic-packet property.
WOL modes such as magic-packet should be an OS policy.
By default, advertise supported modes and use ethtool to activate
the required mode.
Suggested-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: Vineeth Karumanchi <vineeth.karumanchi@amd.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Vineeth Karumanchi [Fri, 21 Jun 2024 04:57:34 +0000 (10:27 +0530)]
net: macb: Add ARP support to WOL
Extend wake-on LAN support with an ARP packet.
Currently, if the PHY supports WOL, ethtool ignores the modes supported
by the MACB. This change extends the WOL modes with the MACB-supported
modes.
Advertise the wake-on LAN supported modes by default without relying on
the dt node. By default, wake-on LAN will be in a disabled state.
Using ethtool, users can enable/disable it or choose packet types.
For wake-on LAN via ARP, ensure the IP address is assigned and
report an error otherwise.
Vineeth Karumanchi [Fri, 21 Jun 2024 04:57:32 +0000 (10:27 +0530)]
net: macb: queue tie-off or disable during WOL suspend
When GEM is used as a wake device, it is not mandatory for the RX DMA
to be active. The RX engine in IP only needs to receive and identify
a wake packet through an interrupt. The wake packet is of no further
significance; hence, it is not required to be copied into memory.
By disabling RX DMA during suspend, we can avoid unnecessary DMA
processing of any incoming traffic.
During suspend, perform either of the below operations:
- tie-off/dummy descriptor: Disable unused queues by connecting
them to a looped descriptor chain without free slots.
- queue disable: The newer IP version allows disabling individual queues.
====================
af_unix: Remove spin_lock_nested() and convert to lock_cmp_fn.
This series removes spin_lock_nested() in AF_UNIX and instead
defines the locking orders as functions tied to each lock by
lockdep_set_lock_cmp_fn().
When the defined function returns a negative value, lockdep
considers it will not cause deadlock. (See ->cmp_fn() in
check_deadlock() and check_prev_add().)
When we cannot define the total ordering, we return -1 for
the allowed ordering and otherwise 0 as undefined. [0]
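As a generic sketch of the mechanism (not the af_unix code itself; struct foo and its ordering rule are made up, and the function would be attached to the lock with lockdep_set_lock_cmp_fn() at init time):

    #include <linux/lockdep.h>
    #include <linux/spinlock.h>

    struct foo {
            spinlock_t      lock;           /* lock.dep_map exists with CONFIG_DEBUG_LOCK_ALLOC */
            bool            listener;
    };

    /* Return a negative value for an ordering lockdep should allow, and 0 when
     * the ordering is undefined (nesting is then reported as a violation).
     */
    static int foo_lock_cmp_fn(const struct lockdep_map *a_map,
                               const struct lockdep_map *b_map)
    {
            const struct foo *a = container_of(a_map, struct foo, lock.dep_map);
            const struct foo *b = container_of(b_map, struct foo, lock.dep_map);

            /* e.g. a listener's lock may be taken before a non-listener's lock */
            if (a->listener && !b->listener)
                    return -1;

            return 0;
    }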
Kuniyuki Iwashima [Thu, 20 Jun 2024 20:56:23 +0000 (13:56 -0700)]
af_unix: Don't use spin_lock_nested() in copy_peercred().
When (AF_UNIX, SOCK_STREAM) socket connect()s to a listening socket,
the listener's sk_peer_pid/sk_peer_cred are copied to the client in
copy_peercred().
Then, two sk_peer_locks are held there; one is client's and another
is listener's.
However, the latter is not needed because we hold the listener's
unix_state_lock() there and unix_listen() cannot update the cred
concurrently.
Let's drop the unnecessary spin_lock() and use the bare spin_lock()
for the client to protect against concurrent reads by getsockopt(SO_PEERCRED).
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Kuniyuki Iwashima [Thu, 20 Jun 2024 20:56:22 +0000 (13:56 -0700)]
af_unix: Remove put_pid()/put_cred() in copy_peercred().
When (AF_UNIX, SOCK_STREAM) socket connect()s to a listening socket,
the listener's sk_peer_pid/sk_peer_cred are copied to the client in
copy_peercred().
Then, the client's sk_peer_pid and sk_peer_cred are always NULL, so
we need not call put_pid() and put_cred() there.
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Kuniyuki Iwashima [Thu, 20 Jun 2024 20:56:19 +0000 (13:56 -0700)]
af_unix: Remove U_LOCK_GC_LISTENER.
Commit 1971d13ffa84 ("af_unix: Suppress false-positive lockdep splat for
spin_lock() in __unix_gc().") added U_LOCK_GC_LISTENER for the old GC,
but it's no longer needed for the new GC.
Let's remove U_LOCK_GC_LISTENER and unix_state_lock_nested() as there's
no user.
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Kuniyuki Iwashima [Thu, 20 Jun 2024 20:56:18 +0000 (13:56 -0700)]
af_unix: Remove U_LOCK_DIAG.
sk_diag_dump_icons() acquires embryo's lock by unix_state_lock_nested()
to fetch its peer.
The embryo's ->peer is set to NULL only when its parent listener is
close()d. Then, unix_release_sock() is called for each embryo after
unlinking skb by skb_dequeue().
In sk_diag_dump_icons(), we hold the parent's recvq lock, so we need
not acquire unix_state_lock_nested(), and peer is always non-NULL.
Let's remove unnecessary unix_state_lock_nested() and non-NULL test
for peer.
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Kuniyuki Iwashima [Thu, 20 Jun 2024 20:56:17 +0000 (13:56 -0700)]
af_unix: Don't acquire unix_state_lock() for sock_i_ino().
sk_diag_dump_peer() and sk_diag_dump() call unix_state_lock() for
sock_i_ino() which reads SOCK_INODE(sk->sk_socket)->i_ino, but it's
protected by sk->sk_callback_lock.
Let's remove unnecessary unix_state_lock().
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Kuniyuki Iwashima [Thu, 20 Jun 2024 20:56:16 +0000 (13:56 -0700)]
af_unix: Define locking order for U_LOCK_SECOND in unix_stream_connect().
While a SOCK_(STREAM|SEQPACKET) socket connect()s to another, we hold
two locks of them by unix_state_lock() and unix_state_lock_nested() in
unix_stream_connect().
Before unix_state_lock_nested(), the following is guaranteed by checking
sk->sk_state:
1. The first socket is TCP_LISTEN
2. The second socket is not the first one
3. Simultaneous connect() must fail
So, the client state can be TCP_CLOSE or TCP_LISTEN or TCP_ESTABLISHED.
Let's define the expected states as unix_state_lock_cmp_fn() instead of
using unix_state_lock_nested().
Note that 2. is detected by debug_spin_lock_before() and 3. cannot be
expressed as lock_cmp_fn.
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Kuniyuki Iwashima [Thu, 20 Jun 2024 20:56:15 +0000 (13:56 -0700)]
af_unix: Don't retry after unix_state_lock_nested() in unix_stream_connect().
When a SOCK_(STREAM|SEQPACKET) socket connect()s to another one, we need
to lock the two sockets to check their states in unix_stream_connect().
We use unix_state_lock() for the server and unix_state_lock_nested() for
the client, with a tricky sk->sk_state check to avoid deadlock.
The possible deadlock scenarios are the following:
1) Self connect()
2) Simultaneous connect()
The former is a simple attempt to grab the same lock, and the latter is
an AB-BA deadlock.
After the server's unix_state_lock(), we check the server socket's state,
and if it's not TCP_LISTEN, connect() fails with -EINVAL.
Then, we avoid the former deadlock by checking the client's state before
unix_state_lock_nested(). If its state is not TCP_LISTEN, we can make
sure that the client and the server are not identical based on the state.
Also, the latter deadlock can be avoided in the same way. Due to the
server sk->sk_state requirement, AB-BA deadlock could happen only with
TCP_LISTEN sockets. So, if the client's state is TCP_LISTEN, we can
give up the second lock to avoid the deadlock.
  CPU 1                     CPU 2                   CPU 3
  connect(A -> B)           connect(B -> A)         listen(A)
  ---                       ---                     ---
  unix_state_lock(B)
  B->sk_state == TCP_LISTEN
  READ_ONCE(A->sk_state) == TCP_CLOSE
                            ^^^^^^^^^
                            ok, will lock A         unix_state_lock(A)
       .--------------'                             WRITE_ONCE(A->sk_state, TCP_LISTEN)
       |                                            unix_state_unlock(A)
       |
       |                    unix_state_lock(A)
       |                    A->sk_state == TCP_LISTEN
       |                    READ_ONCE(B->sk_state) == TCP_LISTEN
       v                                      ^^^^^^^^^^
  unix_state_lock_nested(A)                   Don't lock B !!
Currently, while checking the client's state, we also check if it's
TCP_ESTABLISHED, but this is unlikely and can be checked after we know
the state is not TCP_CLOSE.
Moreover, if it happens after the second lock, we now jump to the restart
label, but it's unlikely that the server is not found during the retry,
so the jump is mostly to revisit the client state check.
Let's remove the retry logic and check the state against TCP_CLOSE first.
Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Disabling bottom halves acts as a per-CPU BKL. On PREEMPT_RT, code within
a local_bh_disable() section remains preemptible. As a result, high-priority
tasks (or threaded interrupts) will be blocked by long-running lower-priority
tasks (or threaded interrupts), which includes softirq sections.
The proposed way out is to introduce explicit per-CPU locks for
resources which are protected by local_bh_disable() and use those only
on PREEMPT_RT, so there is no additional overhead for !PREEMPT_RT builds.
The series introduces the infrastructure and converts large parts of
networking, which is the largest stakeholder here. Once this is done, the
per-CPU lock from local_bh_disable() on PREEMPT_RT can be lifted.
Performance testing. Baseline is net-next as of commit 93bda33046e7a
("Merge branch 'net-constify-ctl_table-arguments-of-utility-functions'")
plus v6.10-rc1. A 10G link is used between two hosts. The command
xdp-bench redirect-cpu --cpu 3 --remote-action drop eth1 -e
was invoked on the receiving side with an ixgbe. The sending side uses
pktgen_sample03_burst_single_flow.sh on i40e.
Sebastian Andrzej Siewior [Thu, 20 Jun 2024 13:22:05 +0000 (15:22 +0200)]
net: Move per-CPU flush-lists to bpf_net_context on PREEMPT_RT.
The per-CPU flush lists, which are accessed from within the NAPI callback
(xdp_do_flush() for instance), are per-CPU. They are subject to the
same problem as struct bpf_redirect_info.
Add the per-CPU lists cpu_map_flush_list, dev_map_flush_list and
xskmap_map_flush_list to struct bpf_net_context. Add wrappers for the
access. The lists are initialized on first usage (similar to
bpf_net_ctx_get_ri()).
Cc: "Björn Töpel" <bjorn@kernel.org> Cc: Alexei Starovoitov <ast@kernel.org> Cc: Andrii Nakryiko <andrii@kernel.org> Cc: Eduard Zingerman <eddyz87@gmail.com> Cc: Hao Luo <haoluo@google.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Fastabend <john.fastabend@gmail.com> Cc: Jonathan Lemon <jonathan.lemon@gmail.com> Cc: KP Singh <kpsingh@kernel.org> Cc: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Cc: Magnus Karlsson <magnus.karlsson@intel.com> Cc: Martin KaFai Lau <martin.lau@linux.dev> Cc: Song Liu <song@kernel.org> Cc: Stanislav Fomichev <sdf@google.com> Cc: Yonghong Song <yonghong.song@linux.dev> Acked-by: Jesper Dangaard Brouer <hawk@kernel.org> Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Link: https://patch.msgid.link/20240620132727.660738-16-bigeasy@linutronix.de Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Sebastian Andrzej Siewior [Thu, 20 Jun 2024 13:22:04 +0000 (15:22 +0200)]
net: Reference bpf_redirect_info via task_struct on PREEMPT_RT.
The XDP redirect process is two staged:
- bpf_prog_run_xdp() is invoked to run a eBPF program which inspects the
packet and makes decisions. While doing that, the per-CPU variable
bpf_redirect_info is used.
- Afterwards xdp_do_redirect() is invoked and accesses bpf_redirect_info
and it may also access other per-CPU variables like xskmap_flush_list.
At the very end of the NAPI callback, xdp_do_flush() is invoked which
does not access bpf_redirect_info but will touch the individual per-CPU
lists.
The per-CPU variables are only used in the NAPI callback, hence disabling
bottom halves is the only protection mechanism. Users from preemptible
context (like cpu_map_kthread_run()) explicitly disable bottom halves
for protection reasons.
Without locking in local_bh_disable() on PREEMPT_RT this data structure
requires explicit locking.
PREEMPT_RT has forced-threaded interrupts enabled and every
NAPI-callback runs in a thread. If each thread has its own data
structure then locking can be avoided.
Create a struct bpf_net_context which contains struct bpf_redirect_info.
Define the variable on stack, use bpf_net_ctx_set() to save a pointer to
it, bpf_net_ctx_clear() removes it again.
The bpf_net_ctx_set() may nest. For instance a function can be used from
within NET_RX_SOFTIRQ/net_rx_action, which uses bpf_net_ctx_set(), and
NET_TX_SOFTIRQ, which does not. Therefore only the first invocation
updates the pointer.
Use bpf_net_ctx_get_ri() as a wrapper to retrieve the current struct
bpf_redirect_info. The returned data structure is zero initialized to
ensure nothing is leaked from stack. This is done on first usage of the
struct. bpf_net_ctx_set() sets bpf_redirect_info::kern_flags to 0 to
note that initialisation is required. First invocation of
bpf_net_ctx_get_ri() will memset() the data structure and update
bpf_redirect_info::kern_flags.
bpf_redirect_info::nh is excluded from memset because it is only used
once BPF_F_NEIGH is set which also sets the nh member. The kern_flags is
moved past nh to exclude it from memset.
The pointer to bpf_net_context is saved in the task's task_struct. Always
using the bpf_net_context approach has the advantage that there are
almost zero differences between PREEMPT_RT and non-PREEMPT_RT builds.
Cc: Andrii Nakryiko <andrii@kernel.org> Cc: Eduard Zingerman <eddyz87@gmail.com> Cc: Hao Luo <haoluo@google.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Fastabend <john.fastabend@gmail.com> Cc: KP Singh <kpsingh@kernel.org> Cc: Martin KaFai Lau <martin.lau@linux.dev> Cc: Song Liu <song@kernel.org> Cc: Stanislav Fomichev <sdf@google.com> Cc: Yonghong Song <yonghong.song@linux.dev> Acked-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Jesper Dangaard Brouer <hawk@kernel.org> Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Link: https://patch.msgid.link/20240620132727.660738-15-bigeasy@linutronix.de Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Sebastian Andrzej Siewior [Thu, 20 Jun 2024 13:22:03 +0000 (15:22 +0200)]
net: Use nested-BH locking for bpf_scratchpad.
bpf_scratchpad is a per-CPU variable and relies on disabled BH for its
locking. Without per-CPU locking in local_bh_disable() on PREEMPT_RT
this data structure requires explicit locking.
Add a local_lock_t to the data structure and use local_lock_nested_bh()
for locking. This change adds only lockdep coverage and does not alter
the functional behaviour for !PREEMPT_RT.
Cc: Alexei Starovoitov <ast@kernel.org> Cc: Andrii Nakryiko <andrii@kernel.org> Cc: Hao Luo <haoluo@google.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Fastabend <john.fastabend@gmail.com> Cc: KP Singh <kpsingh@kernel.org> Cc: Martin KaFai Lau <martin.lau@linux.dev> Cc: Song Liu <song@kernel.org> Cc: Stanislav Fomichev <sdf@google.com> Cc: Yonghong Song <yonghong.song@linux.dev> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Link: https://patch.msgid.link/20240620132727.660738-14-bigeasy@linutronix.de Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Sebastian Andrzej Siewior [Thu, 20 Jun 2024 13:22:02 +0000 (15:22 +0200)]
seg6: Use nested-BH locking for seg6_bpf_srh_states.
The access to seg6_bpf_srh_states is protected by disabling preemption.
Based on the code, the entry point is input_action_end_bpf() and
every other function (the bpf helper functions bpf_lwt_seg6_*()), that
is accessing seg6_bpf_srh_states, should be called from within
input_action_end_bpf().
input_action_end_bpf() accesses seg6_bpf_srh_states first at the top of
the function and then disables preemption. This looks wrong because if
preemption needs to be disabled as part of the locking mechanism then
the variable shouldn't be accessed beforehand.
Looking at how it is used via test_lwt_seg6local.sh then
input_action_end_bpf() is always invoked from softirq context. If this
is always the case then the preempt_disable() statement is superfluous.
If this is not always invoked from softirq then disabling only
preemption is not sufficient.
Replace the preempt_disable() statement with nested-BH locking. This is
not an equivalent replacement as it assumes that the invocation of
input_action_end_bpf() always occurs in softirq context and thus the
preempt_disable() is superfluous.
Add a local_lock_t to the data structure and use local_lock_nested_bh() for
locking. Add lockdep_assert_held() to ensure the lock is held while the
per-CPU variable is referenced in the helper functions.
Cc: Alexei Starovoitov <ast@kernel.org> Cc: Andrii Nakryiko <andrii@kernel.org> Cc: David Ahern <dsahern@kernel.org> Cc: Hao Luo <haoluo@google.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: John Fastabend <john.fastabend@gmail.com> Cc: KP Singh <kpsingh@kernel.org> Cc: Martin KaFai Lau <martin.lau@linux.dev> Cc: Song Liu <song@kernel.org> Cc: Stanislav Fomichev <sdf@google.com> Cc: Yonghong Song <yonghong.song@linux.dev> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Link: https://patch.msgid.link/20240620132727.660738-13-bigeasy@linutronix.de Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Sebastian Andrzej Siewior [Thu, 20 Jun 2024 13:22:00 +0000 (15:22 +0200)]
dev: Use nested-BH locking for softnet_data.process_queue.
softnet_data::process_queue is a per-CPU variable and relies on disabled
BH for its locking. Without per-CPU locking in local_bh_disable() on
PREEMPT_RT this data structure requires explicit locking.
softnet_data::input_queue_head can be updated locklessly. This is fine
because this value is only updated CPU-locally by the local backlog_napi
thread.
Add a local_lock_t to softnet_data and use local_lock_nested_bh() for locking
of process_queue. This change adds only lockdep coverage and does not
alter the functional behaviour for !PREEMPT_RT.
Sebastian Andrzej Siewior [Thu, 20 Jun 2024 13:21:59 +0000 (15:21 +0200)]
dev: Remove PREEMPT_RT ifdefs from backlog_lock.*().
The backlog_napi locking (previously RPS) relies on explicit locking if
either RPS or backlog NAPI is enabled. If both are disabled then locking
was achieved by disabling interrupts, except on PREEMPT_RT. PREEMPT_RT
was excluded because the needed synchronisation was already provided
by local_bh_disable().
Since the introduction of backlog NAPI and making it mandatory for
PREEMPT_RT the ifdef within backlog_lock.*() is obsolete and can be
removed.
Sebastian Andrzej Siewior [Thu, 20 Jun 2024 13:21:58 +0000 (15:21 +0200)]
net: softnet_data: Make xmit per task.
Softirq is preemptible on PREEMPT_RT. Without a per-CPU lock in
local_bh_disable() there is no guarantee that only one device is
transmitting at a time.
With preemption and multiple senders it is possible that the per-CPU
`recursion' counter gets incremented by different threads and exceeds
XMIT_RECURSION_LIMIT leading to a false positive recursion alert.
The `more' member is subject to similar problems if set by one thread
for one driver and wrongly used by another driver within another thread.
Instead of adding a lock to protect the per-CPU variable it is simpler
to make xmit per-task. Sending and receiving skbs happens always
in thread context anyway.
Having a lock to protect the per-CPU counter would block/serialize two
sending threads needlessly. It would also require a recursive lock to
ensure that the owner can increment the counter further.
Make the softnet_data.xmit a task_struct member on PREEMPT_RT. Add
needed wrapper.
Cc: Ben Segall <bsegall@google.com> Cc: Daniel Bristot de Oliveira <bristot@redhat.com> Cc: Dietmar Eggemann <dietmar.eggemann@arm.com> Cc: Juri Lelli <juri.lelli@redhat.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Valentin Schneider <vschneid@redhat.com> Cc: Vincent Guittot <vincent.guittot@linaro.org> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Link: https://patch.msgid.link/20240620132727.660738-9-bigeasy@linutronix.de Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Sebastian Andrzej Siewior [Thu, 20 Jun 2024 13:21:57 +0000 (15:21 +0200)]
netfilter: br_netfilter: Use nested-BH locking for brnf_frag_data_storage.
brnf_frag_data_storage is a per-CPU variable and relies on disabled BH
for its locking. Without per-CPU locking in local_bh_disable() on
PREEMPT_RT this data structure requires explicit locking.
Add a local_lock_t to the data structure and use local_lock_nested_bh()
for locking. This change adds only lockdep coverage and does not alter
the functional behaviour for !PREEMPT_RT.
Cc: Florian Westphal <fw@strlen.de> Cc: Jozsef Kadlecsik <kadlec@netfilter.org> Cc: Nikolay Aleksandrov <razor@blackwall.org> Cc: Pablo Neira Ayuso <pablo@netfilter.org> Cc: Roopa Prabhu <roopa@nvidia.com> Cc: bridge@lists.linux.dev Cc: coreteam@netfilter.org Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Link: https://patch.msgid.link/20240620132727.660738-8-bigeasy@linutronix.de Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Sebastian Andrzej Siewior [Thu, 20 Jun 2024 13:21:56 +0000 (15:21 +0200)]
net/ipv4: Use nested-BH locking for ipv4_tcp_sk.
ipv4_tcp_sk is a per-CPU variable and relies on disabled BH for its
locking. Without per-CPU locking in local_bh_disable() on PREEMPT_RT
this data structure requires explicit locking.
Make a struct with a sock member (original ipv4_tcp_sk) and a
local_lock_t and use local_lock_nested_bh() for locking. This change
adds only lockdep coverage and does not alter the functional behaviour
for !PREEMPT_RT.
Sebastian Andrzej Siewior [Thu, 20 Jun 2024 13:21:55 +0000 (15:21 +0200)]
net/tcp_sigpool: Use nested-BH locking for sigpool_scratch.
sigpool_scratch is a per-CPU variable and relies on disabled BH for its
locking. Without per-CPU locking in local_bh_disable() on PREEMPT_RT
this data structure requires explicit locking.
Make a struct with a pad member (original sigpool_scratch) and a
local_lock_t and use local_lock_nested_bh() for locking. This change
adds only lockdep coverage and does not alter the functional behaviour
for !PREEMPT_RT.
Sebastian Andrzej Siewior [Thu, 20 Jun 2024 13:21:54 +0000 (15:21 +0200)]
net: Use nested-BH locking for napi_alloc_cache.
napi_alloc_cache is a per-CPU variable and relies on disabled BH for its
locking. Without per-CPU locking in local_bh_disable() on PREEMPT_RT
this data structure requires explicit locking.
Add a local_lock_t to the data structure and use local_lock_nested_bh()
for locking. This change adds only lockdep coverage and does not alter
the functional behaviour for !PREEMPT_RT.
Sebastian Andrzej Siewior [Thu, 20 Jun 2024 13:21:53 +0000 (15:21 +0200)]
net: Use __napi_alloc_frag_align() instead of open coding it.
The else condition within __netdev_alloc_frag_align() is an open coded
__napi_alloc_frag_align().
Use __napi_alloc_frag_align() instead of open coding it.
Move fragsz assignment before page_frag_alloc_align() invocation because
__napi_alloc_frag_align() also contains this statement.
Sebastian Andrzej Siewior [Thu, 20 Jun 2024 13:21:52 +0000 (15:21 +0200)]
locking/local_lock: Add local nested BH locking infrastructure.
Add local_lock_nested_bh() locking. It is based on local_lock_t and the
naming follows the preempt_disable_nested() example.
For !PREEMPT_RT + !LOCKDEP it is a per-CPU annotation for locking
assumptions based on local_bh_disable(). The macro is optimized away
during compilation.
For !PREEMPT_RT + LOCKDEP the local_lock_nested_bh() is reduced to
the usual lock-acquire plus lockdep_assert_in_softirq() - ensuring that
BH is disabled.
For PREEMPT_RT local_lock_nested_bh() acquires the specified per-CPU
lock. It does not disable CPU migration because it relies on
local_bh_disable() disabling CPU migration.
With LOCKDEP it performs the usual lockdep checks as with !PREEMPT_RT.
Due to include hell the softirq check has been moved to spinlock.c.
The intention is to use this locking in places where locking of a per-CPU
variable relies on BH being disabled. Instead of treating disabled
bottom halves as a big per-CPU lock, PREEMPT_RT can use this to reduce
the locking scope to what actually needs protecting.
A side effect is that it also documents the protection scope of the
per-CPU variables.
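A minimal sketch of the intended usage pattern (struct frag_cache and its fields are made up; the INIT_LOCAL_LOCK()/local_lock_nested_bh() calls follow the interface this patch introduces):

    #include <linux/local_lock.h>
    #include <linux/percpu.h>

    struct frag_cache {
            local_lock_t    bh_lock;
            unsigned int    offset;
    };

    static DEFINE_PER_CPU(struct frag_cache, frag_cache) = {
            .bh_lock = INIT_LOCAL_LOCK(bh_lock),
    };

    /* Caller runs with BH disabled (e.g. from NAPI). On !PREEMPT_RT the nested-BH
     * lock only adds lockdep coverage; on PREEMPT_RT it is a real per-CPU lock.
     */
    static unsigned int frag_cache_advance(unsigned int len)
    {
            unsigned int old;

            local_lock_nested_bh(&frag_cache.bh_lock);
            old = __this_cpu_read(frag_cache.offset);
            __this_cpu_add(frag_cache.offset, len);
            local_unlock_nested_bh(&frag_cache.bh_lock);

            return old;
    }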
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Link: https://patch.msgid.link/20240620132727.660738-3-bigeasy@linutronix.de Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Sebastian Andrzej Siewior [Thu, 20 Jun 2024 13:21:51 +0000 (15:21 +0200)]
locking/local_lock: Introduce guard definition for local_lock.
Introduce lock guard definition for local_lock_t. There are no users
yet.
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Link: https://patch.msgid.link/20240620132727.660738-2-bigeasy@linutronix.de Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Lukas Bulwahn [Mon, 24 Jun 2024 07:03:26 +0000 (09:03 +0200)]
MAINTAINERS: adjust file entry in FREESCALE QORIQ DPAA FMAN DRIVER
Commit 243996d172a6 ("dt-bindings: net: Convert fsl-fman to yaml") splits
the previous dt text file into four yaml files. It adjusts a corresponding
file entry in MAINTAINERS from txt to yaml, but this adjustment misses
that the file was split and renamed.
Hence, ./scripts/get_maintainer.pl --self-test=patterns complains about a
broken reference.
Adjust the file entry to match the four yaml files resulting from this
commit above.
Signed-off-by: Lukas Bulwahn <lukas.bulwahn@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Sun, 23 Jun 2024 11:50:19 +0000 (12:50 +0100)]
Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue
Tony Nguyen says:
====================
ice: prepare representor for SF support
Michal Swiatkowski says:
This is a series to prepare port representor for supporting also
subfunctions. We need correct devlink locking and the possibility to
update parent VSI after port representor is created.
Refactor how the devlink lock is taken to suit the subfunction use case.
VSI configuration needs to be done after port representor is created.
Port representor needs only allocated VSI. It doesn't need to be
configured before.
VSI needs to be reconfigured when update function is called.
The code for this patchset was split from (too big) patchset [1].
[1] https://lore.kernel.org/netdev/20240213072724.77275-1-michal.swiatkowski@linux.intel.com/
---
Originally from https://lore.kernel.org/netdev/20240605-next-2024-06-03-intel-next-batch-v2-0-39c23963fa78@intel.com/
Changes:
- delete ice_repr_get_by_vsi() from header
- rephrase commit message in moving devlink locking
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
The first 2 patches are by Andy Shevchenko; one cleans up the includes
in the mcp251x driver, the other one updates the sja1000 plx_pci driver
to make use of a predefined PCI subvendor ID.
Mans Rullgard's patch cleans up the Kconfig help text of the slcan
driver.
driver.
Oliver Hartkopp provides a patch to update the documentation, which
removes the ISO 15765-2 specification version where possible.
The next 2 patches are by Harini T and update the documentation of the
xilinx_can driver.
Francesco Valla provides documentation for the ISO 15765-2 protocol.
A patch by Dr. David Alan Gilbert removes an unused struct from the
mscan driver.
12 patches are by Martin Jocic. The first three add support for 3 new
devices to the kvaser_usb driver. The remaining 9 first clean up the
kvaser_pciefd driver, and then add support for MSI.
Krzysztof Kozlowski contributes 3 patches simplifies the CAN SPI
drivers by making use of spi_get_device_match_data().
The last patch, by Martin Hundebøll, reworks the m_can driver
to not enable the CAN transceiver during probe.
* tag 'linux-can-next-for-6.11-20240621' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next: (24 commits)
can: m_can: don't enable transceiver when probing
can: mcp251xfd: simplify with spi_get_device_match_data()
can: mcp251x: simplify with spi_get_device_match_data()
can: hi311x: simplify with spi_get_device_match_data()
can: kvaser_pciefd: Add MSI interrupts
can: kvaser_pciefd: Move reset of DMA RX buffers to the end of the ISR
can: kvaser_pciefd: Change name of return code variable
can: kvaser_pciefd: Rename board_irq to pci_irq
can: kvaser_pciefd: Add unlikely
can: kvaser_pciefd: Add inline
can: kvaser_pciefd: Remove unnecessary comment
can: kvaser_pciefd: Skip redundant NULL pointer check in ISR
can: kvaser_pciefd: Group #defines together
can: kvaser_usb: Add support for Kvaser Mini PCIe 1xCAN
can: kvaser_usb: Add support for Kvaser USBcan Pro 5xCAN
can: kvaser_usb: Add support for Vining 800
can: mscan: remove unused struct 'mscan_state'
Documentation: networking: document ISO 15765-2
can: xilinx_can: Document driver description to list all supported IPs
can: isotp: remove ISO 15675-2 specification version where possible
...
====================
Michal Swiatkowski [Mon, 10 Jun 2024 07:44:31 +0000 (09:44 +0200)]
ice: store representor ID in bridge port
It is used to get the representor structure during cleanup.
Reviewed-by: Wojciech Drewek <wojciech.drewek@intel.com> Signed-off-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com> Reviewed-by: Simon Horman <horms@kernel.org> Tested-by: Sujai Buvaneswaran <sujai.buvaneswaran@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Taehee Yoo [Fri, 21 Jun 2024 04:16:37 +0000 (04:16 +0000)]
selftests: net: change shebang to bash in amt.sh
amt.sh is written in bash, not sh.
So, shebang should be bash.
Signed-off-by: Taehee Yoo <ap420073@gmail.com> Acked-by: Muhammad Usama Anjum <usama.anjum@collabora.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Niklas Söderlund [Thu, 20 Jun 2024 18:22:19 +0000 (20:22 +0200)]
net: ethernet: rtsn: Add support for Renesas Ethernet-TSN
Add initial support for the Renesas Ethernet-TSN End-station device of
R-Car V4H. The Ethernet End-station can connect to an Ethernet network
using a 10 Mbps, 100 Mbps, or 1 Gbps full-duplex link via
MII/GMII/RMII/RGMII, depending on the connected PHY.
The driver supports Rx checksum offload and hardware timestamps.
While full power management and suspend/resume is not yet supported the
driver enables runtime PM in order to enable the module clock. While
explicit clock management using clk_enable() would suffice for the
supported SoC, the module could be reused on SoCs where the module is
part of a power domain.
Signed-off-by: Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Signed-off-by: David S. Miller <davem@davemloft.net>
Matthias Schiffer [Thu, 20 Jun 2024 17:25:50 +0000 (19:25 +0200)]
net: dsa: qca8k: add support for bridge port isolation
Remove a pair of ports from the port matrix when both ports have the
isolated flag set.
Signed-off-by: Matthias Schiffer <mschiffer@universe-factory.net> Reviewed-by: Wojciech Drewek <wojciech.drewek@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Matthias Schiffer [Thu, 20 Jun 2024 17:25:49 +0000 (19:25 +0200)]
net: dsa: qca8k: factor out bridge join/leave logic
Most of the logic in qca8k_port_bridge_join() and qca8k_port_bridge_leave()
is the same. Refactor to reduce duplication and prepare for reusing the
code for implementing bridge port isolation.
dsa_port_offloads_bridge_dev() is used instead of
dsa_port_offloads_bridge(), passing the bridge in as a struct net_device *,
as we won't have a struct dsa_bridge in qca8k_port_bridge_flags().
The error handling is changed slightly in the bridge leave case,
returning early and emitting an error message when a regmap access fails.
This shouldn't matter in practice, as there isn't much we can do if
communication with the switch breaks down in the middle of reconfiguration.
Signed-off-by: Matthias Schiffer <mschiffer@universe-factory.net> Reviewed-by: Wojciech Drewek <wojciech.drewek@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Matthias Schiffer [Thu, 20 Jun 2024 17:25:48 +0000 (19:25 +0200)]
net: dsa: qca8k: do not write port mask twice in bridge join/leave
qca8k_port_bridge_join() sets QCA8K_PORT_LOOKUP_CTRL() for i == port twice,
once in the loop handling all other ports' masks, and finally at the end
with the accumulated port_mask.
The first time it would incorrectly set the port's own bit in the mask,
only to correct the mistake a moment later. qca8k_port_bridge_leave() had
the same issue, but here the regmap_clear_bits() was a no-op rather than
setting an unintended value.
Remove the duplicate assignment by skipping the whole loop iteration for
i == port. The unintended bit setting doesn't seem to have any negative
effects (even when not reverted right away), so the change is submitted
as a simple cleanup rather than a fix.
Signed-off-by: Matthias Schiffer <mschiffer@universe-factory.net> Reviewed-by: Wojciech Drewek <wojciech.drewek@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 21 Jun 2024 11:12:43 +0000 (12:12 +0100)]
Merge branch 'net-mscc-miim-switch-reset'
Herve Codina says:
====================
Handle switch reset in mscc-miim
These two patches were previously sent as part of a bigger series:
https://lore.kernel.org/lkml/20240527161450.326615-1-herve.codina@bootlin.com/
v1 and v2 iterations were handled during the v1 and v2 reviews of this
bigger series. As these two patches are now ready to be applied, they
were extracted from the bigger series and sent alone in this current
series.
This current v3 series takes into account feedback received during the
bigger series v2 review.
Changes v2 -> v3
- patch 1
Drop one useless sentence.
Add 'Reviewed-by: Andrew Lunn <andrew@lunn.ch>'
Add 'Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>'
- patch 2
Add 'Reviewed-by: Andrew Lunn <andrew@lunn.ch>'
Changes v1 -> v2 (as part of the bigger series iterations)
- Patch 1
Improve the reset property description
- Patch 2
Fix a wrong reverse x-mas tree declaration
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Herve Codina [Thu, 20 Jun 2024 12:01:24 +0000 (14:01 +0200)]
dt-bindings: net: mscc-miim: Add resets property
Add the (optional) resets property.
The mscc-miim device is impacted by the switch reset especially when the
mscc-miim device is used as part of the LAN966x PCI device.
Signed-off-by: Herve Codina <herve.codina@bootlin.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 21 Jun 2024 10:33:34 +0000 (11:33 +0100)]
Merge branch 'l2tp-sk_user_data'
James Chapman says:
====================
l2tp: don't use the tunnel socket's sk_user_data in datapath
This series refactors l2tp to not use the tunnel socket's sk_user_data
in the datapath. The main reasons for doing this are
* to allow for simplifying internal socket cleanup code (to be done
in a later series)
* to support multiple L2TPv3 UDP tunnels using the same 5-tuple
address
When handling received UDP frames, l2tp's current approach is to look
up a session in a per-tunnel list. l2tp uses the tunnel socket's
sk_user_data to derive the tunnel context from the receiving socket.
But this results in the socket and tunnel lifetimes being very tightly
coupled and the tunnel/socket cleanup paths being complicated. The
latter has historically been a source of l2tp bugs and makes the code
more difficult to maintain. Also, if sockets are aliased, we can't
trust that the socket's sk_user_data references the right tunnel
anyway. Hence the desire to not use sk_user_data in the datapath.
The new approach is to lookup sessions in a per-net session list
without first deriving the tunnel:
* For L2TPv2, the l2tp header has separate tunnel ID and session ID
fields which can be trivially combined to make a unique 32-bit key
for per-net session lookup.
* For L2TPv3, there is no tunnel ID in the packet header, only a
session ID, which should be unique across all tunnels, so it can be used
as a key for per-net session lookup. However, this only works when
the L2TPv3 session ID really is unique over all tunnels. At least
one L2TPv3 application is known to use the same session ID in
different L2TPv3 UDP tunnels, relying on UDP address/ports to
distinguish them. This worked previously because sessions in UDP
tunnels were kept in a per-tunnel list. To retain support for this,
L2TPv3 session ID collisions are managed using a separate per-net
session hlist, keyed by ID and sk. When looking up a session by ID,
if there's more than one match, sk is used to find the right one.
L2TPv3 sessions in IP-encap tunnels are already looked up by session
ID in a per-net list. With this work, UDP sessions also use the per-net
session list, while allowing for session ID collisions. The existing
per-tunnel hlist becomes a plain list since it is used only in
management and cleanup paths to walk a list of sessions in a given
tunnel.
For better performance, the per-net session lists use IDR. Separate
IDRs are used for L2TPv2 and L2TPv3 sessions to avoid potential key
collisions.
These changes pass l2tp regression tests and improve data forwarding
performance by about 10% in some of my test setups.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
James Chapman [Thu, 20 Jun 2024 11:22:44 +0000 (12:22 +0100)]
l2tp: replace hlist with simple list for per-tunnel session list
The per-tunnel session list is no longer used by the
datapath. However, we still need a list of sessions in the tunnel for
l2tp_session_get_nth, which is used by management code. (An
alternative might be to walk each session IDR list, matching only
sessions of a given tunnel.)
Replace the per-tunnel hlist with a per-tunnel list. In functions
which walk a list of sessions of a tunnel, walk this list instead.
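For illustration, a management-path walk over the new list might look
roughly like the sketch below; the list head and lock names are
assumptions made for this sketch, not taken from the patch itself:

    struct l2tp_session *session;

    spin_lock_bh(&tunnel->list_lock);
    list_for_each_entry(session, &tunnel->session_list, list) {
        /* management or cleanup work on each session of this tunnel */
    }
    spin_unlock_bh(&tunnel->list_lock);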
Signed-off-by: James Chapman <jchapman@katalix.com> Reviewed-by: Tom Parkin <tparkin@katalix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
James Chapman [Thu, 20 Jun 2024 11:22:42 +0000 (12:22 +0100)]
l2tp: use IDR for all session lookups
Add a generic session getter which uses the IDR. Convert all users of
l2tp_tunnel_get_session(), which uses the per-tunnel session list, to
the generic getter.
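A minimal sketch of such a getter, assuming an RCU-protected IDR and a
refcounted session (the function, struct and member names are
illustrative, not necessarily those used in the patch):

    struct l2tp_session *l2tp_session_get(const struct net *net, u32 session_id)
    {
        struct l2tp_net *pn = l2tp_pernet(net);
        struct l2tp_session *session;

        rcu_read_lock_bh();
        session = idr_find(&pn->l2tp_v3_session_idr, session_id);
        /* only return the session if we can take a reference on it */
        if (session && !refcount_inc_not_zero(&session->ref_count))
            session = NULL;
        rcu_read_unlock_bh();

        return session;
    }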
Signed-off-by: James Chapman <jchapman@katalix.com> Reviewed-by: Tom Parkin <tparkin@katalix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
James Chapman [Thu, 20 Jun 2024 11:22:41 +0000 (12:22 +0100)]
l2tp: don't use sk_user_data in l2tp_udp_encap_err_recv
If UDP sockets are aliased, sk might be the wrong socket. There's no
benefit to using sk_user_data to do some checks on the associated
tunnel context. Just report the error anyway, like the UDP core does.
Signed-off-by: James Chapman <jchapman@katalix.com> Reviewed-by: Tom Parkin <tparkin@katalix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
James Chapman [Thu, 20 Jun 2024 11:22:40 +0000 (12:22 +0100)]
l2tp: refactor udp recv lookup to not use sk_user_data
Modify UDP decap to not use the tunnel pointer which comes from the
sock's sk_user_data when parsing the L2TP header. By looking up the
destination session using only the packet contents we avoid potential
UDP 5-tuple aliasing issues which arise from depending on the socket
that received the packet.
Drop the useless error messages on short packet or on failing to find
a session since the tunnel pointer might point to a different tunnel
if multiple sockets use the same 5-tuple.
Short packets (those not big enough to contain an L2TP header) are no
longer counted in the tunnel's invalid counter because we can't derive
the tunnel until we parse the l2tp header to look up the session.
l2tp_udp_encap_recv was a small wrapper around l2tp_udp_recv_core which
used sk_user_data to derive a tunnel pointer in an RCU-safe way. But
we no longer need the tunnel pointer, so remove that code and combine
the two functions.
Signed-off-by: James Chapman <jchapman@katalix.com> Reviewed-by: Tom Parkin <tparkin@katalix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
James Chapman [Thu, 20 Jun 2024 11:22:39 +0000 (12:22 +0100)]
l2tp: store l2tpv2 sessions in per-net IDR
L2TPv2 sessions are currently kept in a per-tunnel hashlist, keyed by
16-bit session_id. When handling received L2TPv2 packets, we need to
first derive the tunnel using the 16-bit tunnel_id or sock, then
look up the session in a per-tunnel hlist using the 16-bit session_id.
We want to avoid using sk_user_data in the datapath and double lookups
on every packet. So instead, use a per-net IDR to hold L2TPv2
sessions, keyed by a 32-bit value derived from the 16-bit tunnel_id
and session_id. This will allow the L2TPv2 UDP receive datapath to find
a session with a single lookup, without deriving the tunnel first.
L2TPv2 sessions are held in their own IDR to avoid potential
key collisions with L2TPv3 sessions.
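The key derivation itself is trivial; a sketch follows, where the exact
bit layout and the IDR member name are assumptions made for illustration:

    /* Combine the 16-bit tunnel and session IDs from the L2TPv2 header
     * into a single 32-bit per-net IDR key.
     */
    static u32 l2tp_v2_session_key(u16 tunnel_id, u16 session_id)
    {
        return ((u32)tunnel_id << 16) | session_id;
    }

    /* Receive path: one lookup, no tunnel derivation needed. */
    session = idr_find(&pn->l2tp_v2_session_idr,
                       l2tp_v2_session_key(tunnel_id, session_id));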
Signed-off-by: James Chapman <jchapman@katalix.com> Reviewed-by: Tom Parkin <tparkin@katalix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
James Chapman [Thu, 20 Jun 2024 11:22:38 +0000 (12:22 +0100)]
l2tp: store l2tpv3 sessions in per-net IDR
L2TPv3 sessions are currently held in one of two fixed-size hash
lists: either a per-net hashlist (IP-encap), or a per-tunnel hashlist
(UDP-encap), keyed by the L2TPv3 32-bit session_id.
In order to lookup L2TPv3 sessions in UDP-encap tunnels efficiently
without finding the tunnel first via sk_user_data, UDP sessions are
now kept in a per-net session list, keyed by session ID. Convert the
existing per-net hashlist to use an IDR for better performance when
there are many sessions, and have L2TPv3 UDP sessions use the same IDR.
Although the L2TPv3 RFC states that the session ID alone identifies
the session, our implementation has allowed the same session ID to be
used in different L2TP UDP tunnels. To retain support for this, a new
per-net session hashtable is used, keyed by the sock and session
ID. If on creating a new session, a session already exists with that
ID in the IDR, the colliding sessions are added to the new hashtable
and the existing IDR entry is flagged. When looking up sessions, the
approach is to first check the IDR and if no unflagged match is found,
check the new hashtable. The sock is made available to session getters
where session ID collisions are to be considered. In this way, the new
hashtable is used only for session ID collisions so can be kept small.
For managing session removal, we need a list of colliding sessions
matching a given ID in order to update or remove the IDR entry of the
ID. This is necessary to detect session ID collisions when future
sessions are created. The list head is allocated on first collision
of a given ID and is refcounted.
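In rough pseudo-C, the lookup order described above is as follows; the
helper and member names here are hypothetical, chosen only to illustrate
the IDR-then-hashtable fallback:

    session = idr_find(&pn->l2tp_v3_session_idr, session_id);
    if (session && !l2tp_session_collision_flagged(session))
        return session;     /* the common, collision-free case */

    /* Collision: disambiguate by the receiving socket. */
    hash_for_each_possible_rcu(pn->l2tp_v3_session_htable, session,
                               hlist, session_id)
        if (session->session_id == session_id &&
            l2tp_session_uses_sock(session, sk))
            return session;

    return NULL;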
Signed-off-by: James Chapman <jchapman@katalix.com> Reviewed-by: Tom Parkin <tparkin@katalix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Sai Krishna [Thu, 20 Jun 2024 08:59:49 +0000 (14:29 +0530)]
octeontx2-pf: Add ucast filter count configurability via devlink.
The existing method of reserving a fixed unicast filter count leads to
wasted MCAM entries if the functionality is not used or fewer entries are
needed. Furthermore, the number of MCAM entries differs among Octeon SoCs.
Implement a means to adjust the unicast filter count via devlink, allowing
for better use of MCAM entries across netdev applications.
commands:
To get the current unicast filter count
# devlink dev param show pci/0002:02:00.0 name unicast_filter_count
To change/set the unicast filter count
# devlink dev param set pci/0002:02:00.0 name unicast_filter_count
value 5 cmode runtime
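For reference, a runtime devlink parameter like this is typically
registered in the driver roughly as sketched below; the parameter ID,
callback names and type are assumptions for the sketch, not the actual
octeontx2 code:

    enum {
        OTX2_DEVLINK_PARAM_ID_UCAST_FLT_CNT = DEVLINK_PARAM_GENERIC_ID_MAX + 1,
    };

    static const struct devlink_param otx2_dl_params[] = {
        DEVLINK_PARAM_DRIVER(OTX2_DEVLINK_PARAM_ID_UCAST_FLT_CNT,
                             "unicast_filter_count", DEVLINK_PARAM_TYPE_U8,
                             BIT(DEVLINK_PARAM_CMODE_RUNTIME),
                             otx2_dl_ucast_flt_cnt_get,
                             otx2_dl_ucast_flt_cnt_set,
                             otx2_dl_ucast_flt_cnt_validate),
    };

    /* Registered once the devlink instance exists: */
    err = devlink_params_register(devlink, otx2_dl_params,
                                  ARRAY_SIZE(otx2_dl_params));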
Signed-off-by: Sai Krishna <saikrishnag@marvell.com> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Thu, 20 Jun 2024 00:27:41 +0000 (17:27 -0700)]
docs: net: document guidance of implementing the SR-IOV NDOs
New drivers were prevented from adding ndo_set_vf_* callbacks
over the last few years. This was expected to result in broader
switchdev adoption, but seems to have had little effect.
Based on a recent netdev meeting, there is broad support for allowing
those ops to be added.
There is a problem with the current API supporting only a limited number
of VFs (100+, which is less than some modern HW supports).
We can try to solve it by adding similar functionality on devlink
ports, but that'd be another API variation to maintain.
So a netlink attribute reshuffling is a more likely outcome.
Document the guidance and make it clear that the API is frozen.
Signed-off-by: Jakub Kicinski <kuba@kernel.org> Reviewed-by: Randy Dunlap <rdunlap@infradead.org> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Lukasz Majewski [Wed, 19 Jun 2024 14:58:09 +0000 (16:58 +0200)]
net: dsa: ksz_common: Allow only up to two HSR HW offloaded ports for KSZ9477
The KSZ9477 allows HSR in-HW offloading for any two selected ports.
This patch adds a check to reject attempts to use more than two ports
with HSR offloading enabled.
The problem shows up with a RedBox configuration (HSR-SAN) - when configuring:
ip link add name hsr0 type hsr slave1 lan1 slave2 lan2 interlink lan3 \
supervision 45 version 1
lan1 (port0) and lan2 (port1) are correctly configured as ports which
can use HSR offloading on ksz9477.
However, when two bits are already set in hsr_ports, we need to
return (-ENOTSUPP), so the interlink port (lan3) is handled with
SW based HSR RedBox support.
Otherwise, some strange network behavior can be observed, as some HSR
frames are visible on the non-HSR network and vice versa. This causes the
switch connected to the interlink port (lan3) to drop frames, and no
communication is possible.
Moreover, conceptually the interlink (i.e. the HSR-SAN port - lan3/port2)
shall only be supported in software, as it is also possible to use the
ksz9477 with only SW based HSR (i.e. port0/1 -> hsr0 with offloading,
port2 -> HSR-SAN/interlink, port4/5 -> hsr1 with SW based HSR).
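Conceptually the added check boils down to something like the sketch
below; the hsr_ports field follows the commit text, but the exact code
and the returned error value may differ in the applied patch:

    /* In the HSR join path: only two ports may be offloaded; any
     * further port (e.g. the interlink) must fall back to SW HSR.
     */
    if (hweight8(dev->hsr_ports) >= 2)
        return -ENOTSUPP;

    dev->hsr_ports |= BIT(port);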
Fixes: 5055cccfc2d1 ("net: hsr: Provide RedBox support (HSR-SAN)") Signed-off-by: Lukasz Majewski <lukma@denx.de> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Reviewed-by: Dan Carpenter <dan.carpenter@linaro.org> Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Csókás, Bence [Wed, 19 Jun 2024 12:31:11 +0000 (14:31 +0200)]
net: fec: Fix FEC_ECR_EN1588 being cleared on link-down
The FEC_ECR_EN1588 bit gets cleared by the MAC reset in `fec_stop()`. As a
result, all 1588 functionality is shut down and all the extended registers
disappear on link-down, making the adapter fall back to the compatibility
"dumb mode". However, some functionality (e.g. PPS) needs to be retained
even without link.
Fixes: 6605b730c061 ("FEC: Add time stamping code and a PTP hardware clock") Cc: Richard Cochran <richardcochran@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://lore.kernel.org/netdev/5fa9fadc-a89d-467a-aae9-c65469ff5fe1@lunn.ch/ Signed-off-by: Csókás, Bence <csokas.bence@prolan.hu> Reviewed-by: Wei Fang <wei.fang@nxp.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Implement netdev_queue_mgmt_ops for bnxt, added in [1]. This will be used
in the io_uring ZC Rx patchset to configure queues with a custom page
pool with a special memory provider for zero-copy support.
The first two patches prep the driver, while the final patch adds the
implementation.
Any arbitrary Rx queue can be reset without affecting other queues. V2
and earlier versions of this patchset were thought to only support
resetting queues that are not in the main RSS context. Upon further
testing I realised that moving queues out and calling
bnxt_hwrm_vnic_update() wasn't necessary.
I didn't include the netdev core API using this netdev_queue_mgmt_ops
because Mina is adding it in his devmem TCP series [2]. But I'm happy to
include it if folks want to include a user with this series.
I tested this series on BCM957504-N1100FY4 with FW 229.1.123.0. I
manually injected failures at all the places that can return an errno
and confirmed that the device/queue is never left in a broken state.
David Wei [Wed, 19 Jun 2024 06:29:31 +0000 (23:29 -0700)]
bnxt_en: implement netdev_queue_mgmt_ops
Implement netdev_queue_mgmt_ops for bnxt added in [1].
Two bnxt_rx_ring_info structs are allocated to hold the new/old queue
memory. Queue memory is copied from/to the main bp->rx_ring[idx]
bnxt_rx_ring_info.
Queue memory is pre-allocated in bnxt_queue_mem_alloc() into a clone,
and then copied into bp->rx_ring[idx] in bnxt_queue_mem_start().
Similarly, when bp->rx_ring[idx] is stopped its queue memory is copied
into a clone, and then freed later in bnxt_queue_mem_free().
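A rough sketch of how the ops could be wired up, based on the callbacks
named in this message and assumed core field names (not necessarily the
exact driver code):

    /* Sketch only: per-queue memory is described by a clone of
     * bnxt_rx_ring_info, pre-allocated by the mem_alloc callback and
     * swapped into bp->rx_ring[idx] when the queue is (re)started.
     */
    static const struct netdev_queue_mgmt_ops bnxt_queue_mgmt_ops = {
        .ndo_queue_mem_size  = sizeof(struct bnxt_rx_ring_info),
        .ndo_queue_mem_alloc = bnxt_queue_mem_alloc,
        .ndo_queue_mem_free  = bnxt_queue_mem_free,
        .ndo_queue_start     = bnxt_queue_mem_start,
        .ndo_queue_stop      = bnxt_queue_mem_stop,
    };

    /* Hooked up once at probe time: */
    dev->queue_mgmt_ops = &bnxt_queue_mgmt_ops;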
I tested this patchset with netdev_rx_queue_restart(), including
inducing errors in all places that return an error code. In all cases,
the queue is left in a good working state.
Rx queues are created/destroyed using bnxt_hwrm_rx_ring_alloc() and
bnxt_hwrm_rx_ring_free(), which issue HWRM_RING_ALLOC and HWRM_RING_FREE
commands respectively to the firmware. By the time a HWRM_RING_FREE
response is received, there won't be any more completions from that
queue.
Thanks to Somnath for helping me with this patch. With their permission
I've added them as Acked-by.
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com> Signed-off-by: David Wei <dw@davidwei.uk> Reviewed-by: Simon Horman <horms@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>