Jakub Kicinski [Wed, 20 Aug 2025 02:56:50 +0000 (19:56 -0700)]
net: page_pool: add page_pool_get()
There is a page_pool_put() function but no get equivalent.
Having multiple references to a page pool is quite useful.
It avoids branching in create / destroy paths in drivers
which support memory providers.
Use the new helper in bnxt.
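A minimal sketch of what the new helper can look like, assuming it
pairs with the existing page_pool_put() that drops pool->user_cnt
(treat the field name as an assumption):

	static inline struct page_pool *page_pool_get(struct page_pool *pool)
	{
		refcount_inc(&pool->user_cnt);
		return pool;
	}

A driver can then take an extra reference unconditionally and release
it with page_pool_put() in the destroy path, regardless of whether a
memory provider handed it the pool.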
Acked-by: Jesper Dangaard Brouer <hawk@kernel.org>
Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com>
Reviewed-by: Mina Almasry <almasrymina@google.com>
Link: https://patch.msgid.link/20250820025704.166248-2-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
====================
Add PPE driver for Qualcomm IPQ9574 SoC
The PPE (packet process engine) hardware block is available in Qualcomm
IPQ chipsets that support PPE architecture, such as IPQ9574 and IPQ5332.
The PPE in the IPQ9574 SoC includes six Ethernet ports (6 GMAC and 6
XGMAC), which are used to connect with external PHY devices through the PCS. The
PPE also includes packet processing offload capabilities for various
networking functions such as route and bridge flows, VLANs, different
tunnel protocols and VPN. It also includes an L2 switch function for
bridging packets among the 6 Ethernet ports and the CPU port. The CPU
port enables packet transfer between the Ethernet ports and the ARM
cores in the SoC, using the Ethernet DMA.
This patch series is the first part of a three part series that will
together enable Ethernet function for IPQ9574 SoC. While support is
initially being added for IPQ9574 SoC, the driver will be easily
extendable to enable Ethernet support for other IPQ SoC such as IPQ5332.
The driver can also be extended later for adding support for L2/L3
network offload features that the PPE can support. The functionality
to be enabled by each of the three series (to be posted sequentially)
is as below:
Part 1: The PPE patch series (this series), which enables the platform
driver, probe and initialization/configuration of different PPE hardware
blocks.
Part 2: The PPE MAC patch series, which enables the phylink operations
for the PPE Ethernet ports.
Part 3: The PPE EDMA patch series, which enables the Rx/Tx Ethernet DMA
and netdevice driver for the 6 PPE Ethernet ports.
A more detailed description of the functions enabled by part 1 is below:
1. Initialize PPE device hardware functions such as buffer management,
queue management, scheduler and clocks in order to bring up PPE
device.
2. Enable platform driver and probe functions
3. Register debugfs file to provide access to various PPE packet
counters. These statistics are recorded by the various hardware
process counters, such as port RX/TX, CPU code and hardware queue
counters.
4. A detailed introduction of PPE along with the PPE hardware diagram
in the first two patches (dt-bindings and documentation).
Below is a reference to an earlier RFC discussion with the community
about enabling Ethernet driver support for Qualcomm IPQ9574 SoC. This
writeup can help provide a higher level architectural view of various
other drivers that support the PPE such as clock and PCS drivers.
Topic: RFC: Advice on adding support for Qualcomm IPQ9574 SoC Ethernet.
https://lore.kernel.org/linux-arm-msm/d2929bd2-bc9e-4733-a89f-2a187e8bf917@quicinc.com/
Signed-off-by: Luo Jie <quic_luoj@quicinc.com>
====================
Luo Jie [Mon, 18 Aug 2025 13:14:37 +0000 (21:14 +0800)]
net: ethernet: qualcomm: Add PPE debugfs support for PPE counters
The PPE hardware maintains counters for packets handled by the various
functional blocks of the PPE. They help in tracing packets passing
through the PPE and in debugging any packet drops.
The counters displayed by this debugfs file are ones that are common
for all Ethernet ports, and they do not include the counters that are
specific for a MAC port. Hence they cannot be displayed using ethtool.
The per-MAC counters will be supported using "ethtool -S" along with
the netdevice driver.
The various types of PPE hardware counters are made available through
debugfs files under the directory "/sys/kernel/debug/ppe/".
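As a sketch of the debugfs plumbing this involves (directory name per
the text above; the structure and callback names are hypothetical
stand-ins for the driver's real counter dump):

	static int ppe_packet_counters_show(struct seq_file *seq, void *v)
	{
		/* dump port RX/TX, CPU code and hardware queue counters */
		return 0;
	}
	DEFINE_SHOW_ATTRIBUTE(ppe_packet_counters);

	void ppe_debugfs_setup(struct ppe_device *ppe_dev)
	{
		ppe_dev->debugfs_root = debugfs_create_dir("ppe", NULL);
		debugfs_create_file("packet_counters", 0444,
				    ppe_dev->debugfs_root, ppe_dev,
				    &ppe_packet_counters_fops);
	}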
Lei Wei [Mon, 18 Aug 2025 13:14:36 +0000 (21:14 +0800)]
net: ethernet: qualcomm: Initialize PPE L2 bridge settings
Initialize the L2 bridge settings for the PPE ports to only enable
L2 frame forwarding between CPU port and PPE Ethernet ports.
The per-port L2 bridge settings are initialized as follows:
For PPE CPU port, the PPE bridge TX is enabled and FDB learning is
disabled. For PPE physical ports, the default L2 forwarding action
is initialized to forward to CPU port only.
L2/FDB learning and forwarding will not be enabled for PPE physical
ports yet, since the port's VSI (Virtual Switch Instance) and VSI
membership are not yet configured, which are required for FDB
forwarding. The VSI and FDB forwarding will later be enabled when
switchdev is enabled.
Luo Jie [Mon, 18 Aug 2025 13:14:35 +0000 (21:14 +0800)]
net: ethernet: qualcomm: Initialize PPE queue to Ethernet DMA ring mapping
Configure the selected queues to map to an Ethernet DMA ring, so that
packets can be received on the ARM cores.
As default initialization, all queues assigned to CPU port 0 are mapped
to the EDMA ring 0. This configuration is later updated during Ethernet
DMA initialization.
Luo Jie [Mon, 18 Aug 2025 13:14:34 +0000 (21:14 +0800)]
net: ethernet: qualcomm: Initialize PPE RSS hash settings
The PPE RSS hash is generated during PPE receive, based on the packet
content (3-tuple or 5-tuple) and the configured RSS seed. The hash is
then used to select the queue on which the packet is delivered to the
ARM CPU.
This patch initializes the RSS hash settings that are used to generate
the hash for the packet during PPE packet receive.
Luo Jie [Mon, 18 Aug 2025 13:14:32 +0000 (21:14 +0800)]
net: ethernet: qualcomm: Initialize PPE service code settings
PPE service code is a special code (0-255) defined by the PPE for its
packet processing stages, according to the network functions required
for the packet.
For packets sent out by the ARM cores on Ethernet ports, service code
1 is used as the default. This service code is used to bypass most of
the PPE's packet processing stages before the packet is transmitted
out of the PPE port, since the software network stack has already
processed the packet.
Luo Jie [Mon, 18 Aug 2025 13:14:31 +0000 (21:14 +0800)]
net: ethernet: qualcomm: Initialize PPE queue settings
Configure unicast and multicast hardware queues for the PPE ports to
enable packet forwarding between the ports.
Each PPE port is assigned a range of queues. The queue ID selection
for the packet is decided by the queue base and queue offset that is
configured based on the internal priority and the RSS hash value of the
packet.
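In other words, the hardware's selection can be modeled roughly as
below (a simplified sketch; all names are hypothetical):

	/* queue_base comes from the port's assigned queue range; the
	 * offsets are programmed per internal priority and RSS hash
	 */
	static u32 ppe_queue_id(u32 queue_base, u32 pri_offset,
				u32 hash_offset)
	{
		return queue_base + pri_offset + hash_offset;
	}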
Luo Jie [Mon, 18 Aug 2025 13:14:29 +0000 (21:14 +0800)]
net: ethernet: qualcomm: Initialize PPE queue management for IPQ9574
QM (queue management) configurations decide the length and depth of
the PPE queues, which are used to drop packets in the event of
congestion.
There are two types of PPE queues - unicast queues (0-255) and multicast
queues (256-299). These queue types are used to forward different types
of traffic, and are configured with different lengths.
Luo Jie [Mon, 18 Aug 2025 13:14:28 +0000 (21:14 +0800)]
net: ethernet: qualcomm: Initialize PPE buffer management for IPQ9574
The BM (Buffer Management) config controls the pause frames generated
on the PPE ports. A maximum of 15 BM ports and 4 groups are supported;
all BM ports are assigned to group 0 by default. The number of
hardware buffers configured for a port influences the flow control
threshold for that port.
Luo Jie [Mon, 18 Aug 2025 13:14:27 +0000 (21:14 +0800)]
net: ethernet: qualcomm: Add PPE driver for IPQ9574 SoC
The PPE (Packet Process Engine) hardware block is available on Qualcomm
IPQ SoCs that support the PPE architecture, such as IPQ9574.
The PPE in IPQ9574 includes six integrated Ethernet MACs for the 6 PPE ports,
buffer management, queue management and scheduler functions. The MACs
can connect with the external PHY or switch devices using the UNIPHY PCS
block available in the SoC.
The PPE also includes various packet processing offload capabilities
such as L3 routing and L2 bridging, VLAN and tunnel processing offload.
It also includes Ethernet DMA function for transferring packets between
ARM cores and PPE Ethernet ports.
This patch adds the base source files and Makefiles for the PPE driver
such as platform driver registration, clock initialization, and PPE
reset routines.
Luo Jie [Mon, 18 Aug 2025 13:14:25 +0000 (21:14 +0800)]
dt-bindings: net: Add PPE for Qualcomm IPQ9574 SoC
The PPE (packet process engine) hardware block is available in Qualcomm
IPQ chipsets that support PPE architecture, such as IPQ9574. The PPE in
the IPQ9574 SoC includes six Ethernet ports (6 GMAC and 6 XGMAC), which
are used to connect with external PHY devices through the PCS. It includes an L2
switch function for bridging packets among the 6 Ethernet ports and the
CPU port. The CPU port enables packet transfer between the Ethernet ports
and the ARM cores in the SoC, using the Ethernet DMA.
The PPE also includes packet processing offload capabilities for various
networking functions such as route and bridge flows, VLANs, different
tunnel protocols and VPN.
The PPE switch is modeled according to the Ethernet switch schema, with
additional properties defined for the switch node for interrupts, clocks,
resets, interconnects and Ethernet DMA. The switch port node is extended
with additional properties for clocks and resets.
====================
net: phy: micrel: Add support for lan8842
Add support for LAN8842 which supports industry-standard SGMII.
While adding this, the first 3 patches in the series further clean up
the driver; they should not introduce any functional changes.
====================
Horatiu Vultur [Mon, 18 Aug 2025 07:51:21 +0000 (09:51 +0200)]
net: phy: micrel: Add support for lan8842
The LAN8842 is a low-power, single port triple-speed (10BASE-T/ 100BASE-TX/
1000BASE-T) ethernet physical layer transceiver (PHY) that supports
transmission and reception of data on standard CAT-5, as well as CAT-5e and
CAT-6, Unshielded Twisted Pair (UTP) cables.
The LAN8842 supports industry-standard SGMII (Serial Gigabit Media
Independent Interface) providing chip-to-chip connection to a Gigabit
Ethernet MAC using a single serialized link (differential pair) in each
direction.
There are 2 variants of the lan8842: one that supports timestamping
(lan8842) and one that doesn't (lan8832).
Horatiu Vultur [Mon, 18 Aug 2025 07:51:20 +0000 (09:51 +0200)]
net: phy: micrel: Replace hardcoded pages with defines
The functions lan_*_page_reg gets as a second parameter the page
where the register is. In all the functions the page was hardcoded.
Replace the hardcoded values with defines to make it more clear
what are those parameters.
As the name suggests, this function modifies a register in an
extended page. It has the same parameters as phy_modify_mmd.
This function was introduced because there are many places in the
code where a register was read, the value modified, and then written
back. Replace all such code with this function to make it clearer.
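A sketch of such a read-modify-write helper (the accessor names are
assumptions based on the description above):

	static int lan_modify_page_reg(struct phy_device *phydev, int page,
				       u32 addr, u16 mask, u16 set)
	{
		int val;

		val = lan_read_page_reg(phydev, page, addr);
		if (val < 0)
			return val;

		return lan_write_page_reg(phydev, page, addr,
					  ((u16)val & ~mask) | set);
	}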
Paolo Abeni [Thu, 21 Aug 2025 08:18:46 +0000 (10:18 +0200)]
Merge tag 'mlx5-next-vhca-id' of git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux
Saeed Mahameed says:
====================
mlx5-next-vhca-id
A preparation patchset for adjacent function vports.
Adjacent functions can delegate their SR-IOV VFs to sibling PFs,
allowing for more flexible and scalable management in multi-host and
ECPF-to-host scenarios. Adjacent vports can be managed by the management
PF via their unique vhca id and can't be managed by function index as the
index can conflict with the local vports/vfs.
This series provides:
- Use the cached vhca id instead of querying it every time from fw
- Query hca cap using vhca id instead of function id when FW supports it
- Add HW capabilities and required definitions for adjacent function vports
* tag 'mlx5-next-vhca-id' of git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux:
{rdma,net}/mlx5: export mlx5_vport_get_vhca_id
net/mlx5: E-Switch, Set/Query hca cap via vhca id
net/mlx5: E-Switch, Cache vport vhca id on first cap query
net/mlx5: mlx5_ifc, Add hardware definitions needed for adjacent vports
====================
net: openvswitch: Use for_each_cpu() where appropriate
Due to legacy reasons, openvswitch code open-codes for_each_cpu() to make
sure that CPU0 is always considered.
Since commit c4b2bf6b4a35 ("openvswitch: Optimize operations for OvS
flow_stats."), the corresponding flow->cpu_used_mask is initialized
such that CPU0 is explicitly set.
So, switch the code to using plain for_each_cpu().
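The conversion boils down to the following (a sketch, not the literal
diff; the loop body and mask field access are illustrative):

	unsigned int cpu;

	/* before: open-coded so the walk always starts at CPU0 */
	for (cpu = 0; cpu < nr_cpu_ids;
	     cpu = cpumask_next(cpu, flow->cpu_used_mask))
		update_stats_one(flow, cpu);	/* hypothetical body */

	/* after: plain iterator; CPU0 is set in cpu_used_mask at init */
	for_each_cpu(cpu, flow->cpu_used_mask)
		update_stats_one(flow, cpu);	/* hypothetical body */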
Jakub Kicinski [Thu, 21 Aug 2025 02:34:11 +0000 (19:34 -0700)]
Merge branch 'bnxt_en-updates-for-net-next'
Michael Chan says:
====================
bnxt_en: Updates for net-next
The first patch is the FW interface update, followed by 3 patches to
support the expanded pcie v2 structure for ethtool -d. The last patch
adds a Hyper-V PCI ID for the 5760X chips (Thor2).
Shruti Parab [Tue, 19 Aug 2025 16:39:17 +0000 (09:39 -0700)]
bnxt_en: Add pcie_stat_len to struct bp
Add this length field to capture the length of the pcie stats structure
supported by the FW. This length will be determined in
bnxt_ethtool_init(). The minimum of this FW length and the length
known to the driver will determine the actual ethtool -d length.
Suggested-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Andy Gospodarek <andrew.gospodarek@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Shruti Parab <shruti.parab@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Link: https://patch.msgid.link/20250819163919.104075-4-michael.chan@broadcom.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Shruti Parab [Tue, 19 Aug 2025 16:39:16 +0000 (09:39 -0700)]
bnxt_en: Refactor bnxt_get_regs()
Separate the code that sends the FW message to retrieve pcie stats into
a new helper function. The caller of the helper will call hwrm_req_hold()
beforehand and so the caller will call hwrm_req_drop() in all cases
afterwards. This helper will be useful when adding the support for the
larger struct pcie_ctx_hw_stats_v2.
Signed-off-by: Shruti Parab <shruti.parab@broadcom.com>
Signed-off-by: Michael Chan <michael.chan@broadcom.com>
Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com>
Link: https://patch.msgid.link/20250819163919.104075-3-michael.chan@broadcom.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Hangbin Liu [Tue, 19 Aug 2025 07:33:48 +0000 (07:33 +0000)]
selftests: net: bpf_offload: print loaded programs on mismatch
The test sometimes fails due to an unexpected number of loaded programs, e.g.:
FAIL: 2 BPF programs loaded, expected 1
File "/usr/libexec/kselftests/net/./bpf_offload.py", line 940, in <module>
progs = bpftool_prog_list(expected=1)
File "/usr/libexec/kselftests/net/./bpf_offload.py", line 187, in bpftool_prog_list
fail(True, "%d BPF programs loaded, expected %d" %
File "/usr/libexec/kselftests/net/./bpf_offload.py", line 89, in fail
tb = "".join(traceback.extract_stack().format())
However, the logs do not show which programs were actually loaded, making it
difficult to debug the failure.
Add printing of the loaded programs when a mismatch is detected to help
troubleshoot such errors. The list is printed on a new line to avoid breaking
the current log format.
Alex Tran [Tue, 19 Aug 2025 02:52:27 +0000 (19:52 -0700)]
selftests/net/socket.c: removed warnings from unused returns
socket.c: In function ‘run_tests’:
socket.c:59:25: warning: ignoring return value of ‘strerror_r’ \
declared with attribute ‘warn_unused_result’ [-Wunused-result]
59 | strerror_r(-s->expect, err_string1, ERR_STRING_SZ);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
socket.c:60:25: warning: ignoring return value of ‘strerror_r’ \
declared with attribute ‘warn_unused_result’ [-Wunused-result]
60 | strerror_r(errno, err_string2, ERR_STRING_SZ);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
socket.c:73:33: warning: ignoring return value of ‘strerror_r’ \
declared with attribute ‘warn_unused_result’ [-Wunused-result]
73 | strerror_r(errno, err_string1, ERR_STRING_SZ);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
changelog:
v2
- const char* messages and fixed patch warnings of max 75 chars
per line
Pengtao He [Tue, 19 Aug 2025 02:15:51 +0000 (10:15 +0800)]
net: avoid one loop iteration in __skb_splice_bits
If *len is equal to 0 at the beginning of __splice_segment(),
it returns true directly. But when *len is decremented from
a positive number to 0 inside __splice_segment(), it returns false,
and __skb_splice_bits() has to call __splice_segment() again.
Recheck *len when it changes and return true promptly, reducing
unnecessary calls to __splice_segment().
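The tail of __splice_segment() then reports completion as soon as *len
hits zero, roughly (a sketch of the idea, not the literal patch):

	do {
		unsigned int flen = min(*len, plen);

		if (spd_fill_page(spd, pipe, page, &flen, poff,
				  skb, linear, sk))
			return true;
		poff += flen;
		plen -= flen;
		*len -= flen;
	} while (*len && plen);

	/* was: return false; forcing a redundant re-entry when *len == 0 */
	return !*len;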
====================
sctp: Convert to use crypto lib, and upgrade cookie auth
This series converts SCTP chunk and cookie authentication to use the
crypto library API instead of crypto_shash. This is much simpler (the
diffstat should speak for itself), and also faster too. In addition,
this series upgrades the cookie authentication to use HMAC-SHA256.
I've tested that kernels with this series applied can continue to
communicate using SCTP with older ones, in either direction, using any
choice of None, HMAC-SHA1, or HMAC-SHA256 chunk authentication.
====================
Eric Biggers [Mon, 18 Aug 2025 20:54:26 +0000 (13:54 -0700)]
sctp: Stop accepting md5 and sha1 for net.sctp.cookie_hmac_alg
The upgrade of the cookie authentication algorithm to HMAC-SHA256 kept
some backwards compatibility for the net.sctp.cookie_hmac_alg sysctl by
still accepting the values 'md5' and 'sha1'. Those algorithms are no
longer actually used, but rather those values were just treated as
requests to enable cookie authentication.
As requested at
https://lore.kernel.org/netdev/CADvbK_fmCRARc8VznH8cQa-QKaCOQZ6yFbF=1-VDK=zRqv_cXw@mail.gmail.com/
and https://lore.kernel.org/netdev/20250818084345.708ac796@kernel.org/ ,
go further and start rejecting 'md5' and 'sha1' completely.
Eric Biggers [Mon, 18 Aug 2025 20:54:25 +0000 (13:54 -0700)]
sctp: Convert cookie authentication to use HMAC-SHA256
Convert SCTP cookies to use HMAC-SHA256, instead of the previous choice
of the legacy algorithms HMAC-MD5 and HMAC-SHA1. Simplify and optimize
the code by using the HMAC-SHA256 library instead of crypto_shash, and
by preparing the HMAC key when it is generated instead of per-operation.
This doesn't break compatibility, since the cookie format is an
implementation detail, not part of the SCTP protocol itself.
Note that the cookie size doesn't change either. The HMAC field was
already 32 bytes, even though previously at most 20 bytes were actually
compared. 32 bytes exactly fits an untruncated HMAC-SHA256 value. So,
although we could safely truncate the MAC to something slightly shorter,
for now just keep the cookie size the same.
I also considered SipHash, but that would generate only 8-byte MACs. An
8-byte MAC *might* suffice here. However, there's quite a lot of
information in the SCTP cookies: more than in TCP SYN cookies. So
absent an analysis that occasional forgeries of all that information is
okay in SCTP, I erred on the side of caution.
Remove HMAC-MD5 and HMAC-SHA1 as options, since the new HMAC-SHA256
option is just better. It's faster as well as more secure. For
example, benchmarking on x86_64, cookie authentication is now nearly 3x
as fast as the previous default choice and implementation of HMAC-MD5.
Also just make the kernel always support cookie authentication if SCTP
is supported at all, rather than making it optional in the build. (It
was sort of optional before, but it didn't really work properly. E.g.,
a kernel with CONFIG_SCTP_COOKIE_HMAC_MD5=n still supported HMAC-MD5
cookie authentication if CONFIG_CRYPTO_HMAC and CONFIG_CRYPTO_MD5
happened to be enabled in the kconfig for other reasons.)
Eric Biggers [Mon, 18 Aug 2025 20:54:24 +0000 (13:54 -0700)]
sctp: Use HMAC-SHA1 and HMAC-SHA256 library for chunk authentication
For SCTP chunk authentication, use the HMAC-SHA1 and HMAC-SHA256 library
functions instead of crypto_shash. This is simpler and faster. There's
no longer any need to pre-allocate 'crypto_shash' objects; the SCTP code
now simply calls into the HMAC code directly.
As part of this, make SCTP always support both HMAC-SHA1 and
HMAC-SHA256. Previously, it only guaranteed support for HMAC-SHA1.
However, HMAC-SHA256 tended to be supported too anyway, as it was
supported if CONFIG_CRYPTO_SHA256 was enabled elsewhere in the kconfig.
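With the library API, computing a chunk MAC reduces to a direct call,
for example (assuming the one-shot hmac_sha256_usingrawkey() interface
from <crypto/sha2.h>; treat the exact names as assumptions):

	#include <crypto/sha2.h>

	static void sctp_calc_hmac_sha256(const u8 *key, size_t key_len,
					  const u8 *data, size_t data_len,
					  u8 out[SHA256_DIGEST_SIZE])
	{
		/* no crypto_shash allocation, no error paths to handle */
		hmac_sha256_usingrawkey(key, key_len, data, data_len, out);
	}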
Eric Biggers [Mon, 18 Aug 2025 20:54:22 +0000 (13:54 -0700)]
selftests: net: Explicitly enable CONFIG_CRYPTO_SHA1 for IPsec
xfrm_policy.sh, nft_flowtable.sh, and vrf-xfrm-tests.sh use 'ip xfrm'
with SHA-1, either 'auth sha1' or 'auth-trunc hmac(sha1)'. That
requires CONFIG_CRYPTO_SHA1, which CONFIG_INET_ESP intentionally doesn't
select (as per its help text). Previously, the config for these tests
relied on CONFIG_CRYPTO_SHA1 being selected by the unrelated option
CONFIG_IP_SCTP. Since CONFIG_IP_SCTP is being changed to no longer do
that, instead add CONFIG_CRYPTO_SHA1 to the configs explicitly.
Reported-by: Paolo Abeni <pabeni@redhat.com>
Closes: https://lore.kernel.org/r/766e4508-aaba-4cdc-92b4-e116e52ae13b@redhat.com
Suggested-by: Florian Westphal <fw@strlen.de>
Acked-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: Eric Biggers <ebiggers@kernel.org>
Link: https://patch.msgid.link/20250818205426.30222-2-ebiggers@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Kuniyuki Iwashima [Fri, 15 Aug 2025 20:16:14 +0000 (20:16 +0000)]
net-memcg: Introduce mem_cgroup_from_sk().
We will store a flag in the lowest bit of sk->sk_memcg.
Then, directly dereferencing sk->sk_memcg will be illegal, and we
do not want to allow touching the raw sk->sk_memcg in many places.
Let's introduce mem_cgroup_from_sk().
Other places accessing the raw sk->sk_memcg will be converted later.
Note that we cannot define the helper as an inline function in
memcontrol.h as we cannot access any fields of struct sock there
due to circular dependency, so it is placed in sock.h.
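A sketch of the helper (per the text, it lives in sock.h; at this
point in the series it can simply return the raw pointer, and once the
flag lands it will mask the low bit off first):

	static inline struct mem_cgroup *mem_cgroup_from_sk(const struct sock *sk)
	{
	#ifdef CONFIG_MEMCG
		return sk->sk_memcg;
	#else
		return NULL;
	#endif
	}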
Signed-off-by: Kuniyuki Iwashima <kuniyu@google.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
Link: https://patch.msgid.link/20250815201712.1745332-7-kuniyu@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Kuniyuki Iwashima [Fri, 15 Aug 2025 20:16:12 +0000 (20:16 +0000)]
net: Call trace_sock_exceed_buf_limit() for memcg failure with SK_MEM_RECV.
Initially, trace_sock_exceed_buf_limit() was invoked when
__sk_mem_raise_allocated() failed due to the memcg limit or the
global limit.
However, commit d6f19938eb031 ("net: expose sk wmem in
sock_exceed_buf_limit tracepoint") somehow suppressed the event
only when memcg failed to charge for SK_MEM_RECV, although the
memcg failure for SK_MEM_SEND still triggers the event.
Let's restore the event for SK_MEM_RECV.
Signed-off-by: Kuniyuki Iwashima <kuniyu@google.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
Link: https://patch.msgid.link/20250815201712.1745332-5-kuniyu@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
====================
stmmac: stop silently dropping bad checksum packets
This series reworks how stmmac handles receive checksum offload
(CoE) errors on dwmac4.
At present, when CoE is enabled, the hardware silently discards any
frame that fails checksum validation. These packets never reach the
driver and are not accounted in the generic drop statistics. They are
only visible in the stmmac-specific counters as "payload error" or
"header error" packets, which makes it harder to debug or monitor
network issues.
Following discussion [1], the driver is reworked to propagate checksum
error information up to the stack. With these changes, CoE stays
enabled, but frames that fail hardware validation are no longer dropped
in hardware. Instead, the driver marks them with CHECKSUM_NONE so the
network stack can validate, drop, and properly account them in the
standard drop statistics.
Oleksij Rempel [Mon, 18 Aug 2025 09:02:17 +0000 (11:02 +0200)]
net: stmmac: dwmac4: stop hardware from dropping checksum-error packets
Tell the MAC not to discard frames that fail TCP/IP checksum
validation.
By default, when the hardware checksum engine (CoE) is enabled,
dwmac4 silently drops any packet where the offload engine detects
a checksum error. These frames are not reported to the driver and
are not counted in any statistics as dropped packets.
Set the MTL_OP_MODE_DIS_TCP_EF bit when initializing the Rx channel so
that all packets are delivered, even if they failed hardware checksum
validation. CoE remains enabled, but instead of dropping such frames,
the driver propagates the error status and marks the skb with
CHECKSUM_NONE. This allows the stack to verify and drop the packet
while updating statistics.
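The change itself is a one-bit tweak of the Rx channel operation mode
register, roughly (a sketch; the accessor macro arguments are an
assumption, MTL_OP_MODE_DIS_TCP_EF is the bit named above):

	u32 mtl_rx_op = readl(ioaddr + MTL_CHAN_RX_OP_MODE(dwmac4_addrs, channel));

	/* deliver frames that fail TCP/IP checksum validation instead
	 * of letting the MTL silently discard them; CoE stays enabled
	 */
	mtl_rx_op |= MTL_OP_MODE_DIS_TCP_EF;
	writel(mtl_rx_op, ioaddr + MTL_CHAN_RX_OP_MODE(dwmac4_addrs, channel));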
Oleksij Rempel [Mon, 18 Aug 2025 09:02:16 +0000 (11:02 +0200)]
net: stmmac: dwmac4: report Rx checksum errors in status
Propagate hardware checksum failures from the descriptor parser to the
caller.
Currently, dwmac4_wrback_get_rx_status() updates stats when the Rx
descriptor signals an IP header or payload checksum error, but it does
not reflect this in its return value. The higher-level stmmac_rx() code
therefore cannot tell that hardware checksum validation failed.
Set the csum_none flag in the returned status when either
RDES1_IP_HDR_ERROR or RDES1_IP_PAYLOAD_ERROR is present. This aligns
dwmac4 with enh_desc_coe_rdes0() and lets stmmac_rx() mark the skb as
CHECKSUM_NONE for software verification.
This is a preparatory step for disabling the hardware filter that drops
frames which do not pass checksum validation.
The stmmac_rx function would previously set skb->ip_summed to
CHECKSUM_UNNECESSARY if hardware checksum offload (CoE) was enabled
and the packet was of a known IP ethertype.
However, this logic failed to check if the hardware had actually
reported a checksum error. The hardware status, indicating a header or
payload checksum failure, was being ignored at this stage. This could
cause corrupt packets to be passed up the network stack as valid.
This patch corrects the logic by checking the `csum_none` status flag,
which is set when the hardware reports a checksum error. If this flag
is set, skb->ip_summed is now correctly set to CHECKSUM_NONE,
ensuring the kernel's network stack will perform its own validation and
properly handle the corrupt packet.
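The corrected receive-path decision looks roughly like this (condensed
from the description above; the rx_coe and ethertype checks stand in
for the driver's existing conditions):

	/* in stmmac_rx(), once the descriptor status is known */
	if (unlikely(status & csum_none)) {
		/* hardware flagged a header/payload checksum error:
		 * let the stack verify and account the drop
		 */
		skb->ip_summed = CHECKSUM_NONE;
	} else if (rx_coe && known_ip_ethertype) {
		skb->ip_summed = CHECKSUM_UNNECESSARY;
	} else {
		skb->ip_summed = CHECKSUM_NONE;
	}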
====================
There are a code cleanup and a parameter check for the hns3 driver
This patchset includes:
1. a parameter check that was omitted from a fix in the net branch
https://lore.kernel.org/all/20250723072900.GV2459@horms.kernel.org/
2. a small code cleanup
====================
Markus Stockhausen [Fri, 15 Aug 2025 08:20:09 +0000 (04:20 -0400)]
net: phy: realtek: enable serdes option mode for RTL8226-CG
The RTL8226-CG can make use of the serdes option mode feature to
dynamically switch between SGMII and 2500base-X. From what is
known, the setup sequence is much simpler, with no magic values.
Convert the existing config_init() into a helper that configures
the PHY depending on generation 1 or 2. Call the helper from two
separate new config_init() functions.
Finally convert the phy_driver specs of the RTL8226-CG to make
use of the new configuration and switch over to the extended
read_status() function to dynamically change the interface
according to the serdes mode.
Remark! The logic could be simpler if the serdes mode could be
set before all other generation 2 magic values. Due to missing
RTL8221B test hardware the mmd command order was kept.
Miguel García [Mon, 18 Aug 2025 22:02:03 +0000 (00:02 +0200)]
ipv6: ip6_gre: replace strcpy with strscpy for tunnel name
Replace the strcpy() call that copies the device name into
tunnel->parms.name with strscpy(), to avoid potential overflow
and guarantee NULL termination. This uses the two-argument
form of strscpy(), where the destination size is inferred
from the array type.
Destination is tunnel->parms.name (size IFNAMSIZ).
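The change itself is a one-liner; with the two-argument form the
destination size is inferred from the array, as described above:

	/* before */
	strcpy(tunnel->parms.name, dev->name);

	/* after: bounded copy, NUL-terminated; size taken from
	 * tunnel->parms.name being a char[IFNAMSIZ] array
	 */
	strscpy(tunnel->parms.name, dev->name);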
Tested in QEMU (Alpine rootfs):
- Created IPv6 GRE tunnels over loopback
- Assigned overlay IPv6 addresses
- Verified bidirectional ping through the tunnel
- Changed tunnel parameters at runtime (`ip -6 tunnel change`)
====================
net: Convert to skb_dstref_steal and skb_dstref_restore
To diagnose and prevent issues similar to [0], emit a warning
(CONFIG_DEBUG_NET) from skb_dst_set and skb_dst_set_noref when
overwriting a non-null reference-counted entry. Two new helpers
are added to handle special cases where the entry needs to be
reset and restored: skb_dstref_steal/skb_dstref_restore. The bulk of
the patches in the series converts manual _skb_refdst manipulations
to these new helpers.
Stanislav Fomichev [Mon, 18 Aug 2025 15:40:31 +0000 (08:40 -0700)]
chtls: Convert to skb_dst_reset
Going forward skb_dst_set will assert that skb dst_entry
is empty during skb_dst_set. skb_dstref_steal is added to reset
existing entry without doing refcnt. Chelsio driver is
doing extra dst management via skb_dst_set(NULL). Replace
these calls with skb_dstref_steal.
Stanislav Fomichev [Mon, 18 Aug 2025 15:40:29 +0000 (08:40 -0700)]
net: Switch to skb_dstref_steal/skb_dstref_restore for ip_route_input callers
Going forward skb_dst_set will assert that skb dst_entry
is empty during skb_dst_set. skb_dstref_steal is added to reset
existing entry without doing refcnt. skb_dstref_restore should
be used to restore the previous entry. Convert icmp_route_lookup
and ip_options_rcv_srr to these helpers. Add extra call to
skb_dstref_reset to icmp_route_lookup to clear the ip_route_input
entry.
Stanislav Fomichev [Mon, 18 Aug 2025 15:40:28 +0000 (08:40 -0700)]
netfilter: Switch to skb_dstref_steal to clear dst_entry
Going forward skb_dst_set will assert that skb dst_entry
is empty during skb_dst_set. skb_dstref_steal is added to reset
existing entry without doing refcnt. Switch to skb_dstref_steal
in ip[6]_route_me_harder and add a comment on why it's safe
to skip skb_dstref_restore.
Stanislav Fomichev [Mon, 18 Aug 2025 15:40:27 +0000 (08:40 -0700)]
xfrm: Switch to skb_dstref_steal to clear dst_entry
Going forward skb_dst_set will assert that skb dst_entry
is empty during skb_dst_set. skb_dstref_steal is added to reset
existing entry without doing refcnt. Switch to skb_dstref_steal
in __xfrm_route_forward and add a comment on why it's safe
to skip skb_dstref_restore.
Stanislav Fomichev [Mon, 18 Aug 2025 15:40:26 +0000 (08:40 -0700)]
net: Add skb_dstref_steal and skb_dstref_restore
Going forward skb_dst_set will assert that skb dst_entry
is empty during skb_dst_set to prevent potential leaks. There
are few places that still manually manage dst_entry not using
the helpers. Convert them to the following new helpers:
- skb_dstref_steal that resets dst_entry and returns previous dst_entry
value
- skb_dstref_restore that restores dst_entry previously reset via
skb_dstref_steal
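A sketch of the two helpers, assuming they operate directly on the raw
skb->_skb_refdst word the way the series description suggests:

	static inline unsigned long skb_dstref_steal(struct sk_buff *skb)
	{
		unsigned long refdst = skb->_skb_refdst;

		skb->_skb_refdst = 0;
		return refdst;
	}

	static inline void skb_dstref_restore(struct sk_buff *skb,
					      unsigned long refdst)
	{
		skb->_skb_refdst = refdst;
	}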
====================
net: Speedup some nexthop handling when having A LOT of nexthops
Configuring a very large number of nexthops is quite possible within
a reasonable time-frame. But certain netlink commands can become
extremely slow.
This series addresses some of these, namely dumping and removing
nexthops.
Christoph Paasch [Sat, 16 Aug 2025 23:12:49 +0000 (16:12 -0700)]
net: When removing nexthops, don't call synchronize_net if it is not necessary
When removing a nexthop, commit 90f33bffa382 ("nexthops: don't modify published nexthop groups") added a
call to synchronize_rcu() (later changed to _net()) to make sure
everyone sees the new nexthop-group before the rtnl-lock is released.
When one wants to delete a large number of groups and nexthops, it is
fastest to first flush the groups (ip nexthop flush groups) and then
flush the nexthops themselves (ip -6 nexthop flush). As that way the
groups don't need to be rebalanced.
However, `ip -6 nexthop flush` will still take a long time if there is
a very large number of nexthops because of the call to
synchronize_net(). Now, if there are no more groups, there is no point
in calling synchronize_net(). So, let's skip that entirely by checking
if nh->grp_list is empty.
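Conceptually the change is just a guard around the barrier (a sketch;
the exact condition in the patch may differ):

	/* in the nexthop removal path */
	if (!list_empty(&nh->grp_list))
		/* only needed while groups still reference this nexthop */
		synchronize_net();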
This gives us a nice speedup:
BEFORE:
=======
$ time sudo ip -6 nexthop flush
Dump was interrupted and may be inconsistent.
Flushed 2097152 nexthops
real 1m45.345s
user 0m0.001s
sys 0m0.005s
$ time sudo ip -6 nexthop flush
Dump was interrupted and may be inconsistent.
Flushed 4194304 nexthops
real 3m10.430s
user 0m0.002s
sys 0m0.004s
AFTER:
======
$ time sudo ip -6 nexthop flush
Dump was interrupted and may be inconsistent.
Flushed 2097152 nexthops
real 0m17.545s
user 0m0.003s
sys 0m0.003s
$ time sudo ip -6 nexthop flush
Dump was interrupted and may be inconsistent.
Flushed 4194304 nexthops
real 0m35.823s
user 0m0.002s
sys 0m0.004s
Signed-off-by: Christoph Paasch <cpaasch@openai.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Link: https://patch.msgid.link/20250816-nexthop_dump-v2-2-491da3462118@openai.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Christoph Paasch [Sat, 16 Aug 2025 23:12:48 +0000 (16:12 -0700)]
net: Make nexthop-dumps scale linearly with the number of nexthops
When we have a (very) large number of nexthops, they do not fit within a
single message. rtm_dump_walk_nexthops() thus will be called repeatedly
and ctx->idx is used to avoid dumping the same nexthops again.
The way we avoid dumping the same nexthops is by walking the entire
nexthop rb-tree from the left-most node until we find a node whose id
is >= s_idx. That does not scale well.
Instead of this inefficient approach, rather go directly through the
tree to the nexthop that should be dumped (the one whose nh_id >=
s_idx). This allows us to find the relevant node in O(log(n)).
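The O(log(n)) lookup is a standard rb-tree descent, along these lines
(a sketch reusing the names from the description, not the literal
patch):

	static struct nexthop *nexthop_find_first_ge(struct rb_root *root,
						     u32 s_idx)
	{
		struct rb_node *node = root->rb_node;
		struct nexthop *nh, *first = NULL;

		/* descend to the left-most nexthop whose id is >= s_idx */
		while (node) {
			nh = rb_entry(node, struct nexthop, rb_node);
			if (nh->id < s_idx) {
				node = node->rb_right;
			} else {
				first = nh;
				node = node->rb_left;
			}
		}
		return first;	/* continue the dump from here with rb_next() */
	}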
We have quite a nice improvement with this:
Before:
=======
--> ~1M nexthops:
$ time ~/libnl/src/nl-nh-list | wc -l
1050624
real 0m21.080s
user 0m0.666s
sys 0m20.384s
--> ~2M nexthops:
$ time ~/libnl/src/nl-nh-list | wc -l
2101248
real 1m51.649s
user 0m1.540s
sys 1m49.908s
After:
======
--> ~1M nexthops:
$ time ~/libnl/src/nl-nh-list | wc -l
1050624
real 0m1.157s
user 0m0.926s
sys 0m0.259s
--> ~2M nexthops:
$ time ~/libnl/src/nl-nh-list | wc -l
2101248
real 0m2.763s
user 0m2.042s
sys 0m0.776s
Signed-off-by: Christoph Paasch <cpaasch@openai.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Reviewed-by: Nikolay Aleksandrov <razor@blackwall.org>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Link: https://patch.msgid.link/20250816-nexthop_dump-v2-1-491da3462118@openai.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Fri, 15 Aug 2025 23:15:13 +0000 (16:15 -0700)]
selftests: drv-net: ncdevmem: make configure_channels() support combined channels
ncdevmem tests that the kernel correctly rejects attempts
to deactivate queues with MPs bound.
Make the configure_channels() test support combined channels.
Currently it tries to set the queue counts to rx N tx N-1,
which only makes sense for devices which have IRQs per ring
type. Most modern devices used combined IRQs/channels with
both Rx and Tx queues. Since the math is total Rx == combined+Rx
setting Rx when combined is non-zero will be increasing the total
queue count, not decreasing as the test intends.
Note that the test would previously also try to set the Tx
ring count to Rx - 1, for some reason. Which would be 0
if the device has only 2 queues configured.
With this change (device with 2 queues):
setting channel count rx:1 tx:1
YNL set channels: Kernel error: 'requested channel counts are too low for existing memory provider setting (2)'
Jakub Kicinski [Fri, 15 Aug 2025 22:41:00 +0000 (15:41 -0700)]
selftests: drv-net: tso: increase the retransmit threshold
We see quite a few flakes during the TSO test against virtualized
devices in NIPA. There's often 10-30 retransmissions during the
test. Sometimes as many as 100. Set the retransmission threshold
at 1/4th of the wire frame target.
Chaoyi Chen [Fri, 15 Aug 2025 02:35:15 +0000 (10:35 +0800)]
net: ethernet: stmmac: dwmac-rk: Allow clk_phy to be used for an external PHY
For an external PHY, clk_phy should be optional, but some external
PHYs need the clock input from clk_phy. This patch adds support for
setting clk_phy for an external PHY.
Dipayaan Roy [Thu, 14 Aug 2025 14:04:10 +0000 (07:04 -0700)]
net: mana: Use page pool fragments for RX buffers instead of full pages to improve memory efficiency.
This patch enhances RX buffer handling in the mana driver by allocating
pages from a page pool and slicing them into MTU-sized fragments, rather
than dedicating a full page per packet. This approach is especially
beneficial on systems with large base page sizes like 64KB.
Key improvements:
- Proper integration of page pool for RX buffer allocations.
- MTU-sized buffer slicing to improve memory utilization.
- Reduce overall per Rx queue memory footprint.
- Automatic fallback to full-page buffers when:
* Jumbo frames are enabled (MTU > PAGE_SIZE / 2).
* The XDP path is active, to avoid complexities with fragment reuse.
Testing on VMs with 64KB pages shows around 200% throughput improvement.
Memory efficiency is significantly improved due to reduced wastage in page
allocations. Example: We are now able to fit 35 rx buffers in a single
64KB page for an MTU size of 1500, instead of 1 rx buffer per page
previously.
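The frag-based allocation path boils down to the page pool frag API,
roughly (a sketch; mana's actual field and variable names may differ):

	struct page *page;
	unsigned int offset;
	void *va;

	/* carve an rx-buffer-sized slice out of a (possibly shared) page */
	page = page_pool_dev_alloc_frag(rxq->page_pool, &offset, rx_buf_size);
	if (!page)
		return NULL;

	va = page_address(page) + offset;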
Tested:
- iperf3, iperf2, and nttcp benchmarks.
- Jumbo frames with MTU 9000.
- Native XDP programs (XDP_PASS, XDP_DROP, XDP_TX, XDP_REDIRECT) for
testing the XDP path in driver.
- Memory leak detection (kmemleak).
- Driver load/unload, reboot, and stress scenarios.
Lorenzo Bianconi [Thu, 14 Aug 2025 07:51:16 +0000 (09:51 +0200)]
net: airoha: Add wlan flowtable TX offload
Introduce support to offload the traffic received on the ethernet NIC
and forwarded to the wireless one using HW Packet Processor Engine (PPE)
capabilities.
====================
net: macb: Add TAPRIO traffic scheduling support
Implement Time-Aware Traffic Scheduling (TAPRIO) offload support
for Cadence MACB/GEM ethernet controllers to enable IEEE 802.1Qbv
compliant time-sensitive networking (TSN) capabilities.
Key features implemented:
- Complete TAPRIO qdisc offload infrastructure with TC_SETUP_QDISC_TAPRIO
- Hardware-accelerated time-based gate control for multiple queues
- Enhanced Scheduled Traffic (ENST) register configuration and management
- Gate state scheduling with configurable start times, on/off intervals
- Support for cycle-time based traffic scheduling with validation
- Hardware capability detection via MACB_CAPS_QBV flag
- Robust error handling and parameter validation
- Queue-specific timing register programming
(ENST_START_TIME, ENST_ON_TIME, ENST_OFF_TIME)
Changes include:
- Add enst_ns_to_hw_units(): Converts nanoseconds to hardware units
- Add enst_max_hw_interval(): Returns max interval for given speed
- Add macb_taprio_setup_replace() for TAPRIO configuration
- Add macb_taprio_destroy() for cleanup and reset
- Add macb_setup_tc() as TC offload entry point
- Enable NETIF_F_HW_TC feature for QBV-capable hardware
- Add ENST register offsets to queue configuration
The implementation validates timing constraints against hardware limits,
supports per-queue gate mask configuration, and provides comprehensive
logging for debugging and monitoring. Hardware registers are programmed
atomically with proper locking to ensure consistent state.
Tested on Xilinx Versal platforms with QBV-capable MACB controllers.
Vineeth Karumanchi [Thu, 14 Aug 2025 07:10:58 +0000 (12:40 +0530)]
net: macb: Add capability-based QBV detection and Versal support
The 'exclude_qbv' bit in the designcfg_debug1 register varies across
MACB/GEM IP revisions, making direct probing unreliable for detecting
QBV support. This patch introduces a capability-based approach for
consistent QBV feature identification across the IP family.
Platform support updates:
- Establish foundation for QBV detection in TAPRIO implementation
- Enable MACB_CAPS_QBV for Xilinx Versal platform configuration
- Fix capability line wrapping, ensuring code stays within 80 columns
====================
eth: fbnic: Add XDP support for fbnic
This patch series introduces basic XDP support for fbnic. To enable this,
it also includes preparatory changes such as making the HDS threshold
configurable via ethtool, updating headroom for fbnic, tracking
frag state in shinfo, and prefetching the first cacheline of data.
Mohsin Bashir [Wed, 13 Aug 2025 22:13:18 +0000 (15:13 -0700)]
eth: fbnic: Collect packet statistics for XDP
Add support for XDP statistics collection and reporting via rtnl_link
and netdev_queue API.
For XDP programs without frags support, fbnic requires MTU to be less
than the HDS threshold. If an over-sized frame is received, the frame
is dropped and recorded as rx_length_errors reported via ip stats to
highlight that this is an error.
Mohsin Bashir [Wed, 13 Aug 2025 22:13:16 +0000 (15:13 -0700)]
eth: fbnic: Add support for XDP queues
Add support for allocating XDP_TX queues and configuring ring support.
FBNIC has been designed with XDP support in mind. Each Tx queue has 2
submission queues and one completion queue, with the expectation that
one of the submission queues will be used by the stack, and the other
by XDP. XDP queues are populated by XDP_TX and start from index 128
in the TX queue array.
The support for XDP_TX is added in the next patch.
Mohsin Bashir [Wed, 13 Aug 2025 22:13:15 +0000 (15:13 -0700)]
eth: fbnic: Add XDP pass, drop, abort support
Add basic support for attaching an XDP program to the device and support
for PASS/DROP/ABORT actions. In fbnic, buffers are always mapped as
DMA_BIDIRECTIONAL.
The BPF program pointer can be read either on a per-packet basis or on a
per-NAPI poll basis. Both approaches are functionally equivalent in the
current code. Stick to per-packet as it limits the number of arguments
we need to pass around.
On the XDP hot path, check that packets with fragments are only allowed
when multi-buffer support is enabled for the XDP program. Ideally, this
check should not be necessary because ndo_bpf verifies that for XDP
programs without multi-buff support, MTU is less than the hds_thresh.
However, the MTU currently does not enforce the receive size which would
require cleaning up the data path and bouncing the link. For practical
reasons, prioritize the ability to enter and exit BPF mode with different
MTU sizes without requiring a full reconfig.
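The per-packet hot path then follows the usual XDP verdict pattern,
roughly (a sketch; everything around the switch is illustrative):

	u32 act = bpf_prog_run_xdp(prog, &xdp);

	switch (act) {
	case XDP_PASS:
		break;	/* build the skb and hand it to the stack */
	case XDP_DROP:
		break;	/* recycle the buffers back to the page pool */
	default:
		bpf_warn_invalid_xdp_action(netdev, prog, act);
		fallthrough;
	case XDP_ABORTED:
		trace_xdp_exception(netdev, prog, act);
		break;	/* treated like a drop */
	}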
Testing:
Hook a simple XDP program that passes all the packets destined for a
specific port
iperf3 -c 192.168.1.10 -P 5 -p 12345
Connecting to host 192.168.1.10, port 12345
[ 5] local 192.168.1.9 port 46702 connected to 192.168.1.10 port 12345
[ ID] Interval Transfer Bitrate Retr Cwnd
- - - - - - - - - - - - - - - - - - - - - - - - -
[SUM] 1.00-2.00 sec 3.86 GBytes 33.2 Gbits/sec 0
XDP_DROP:
Hook an XDP program that drops packets destined for a specific port
- Validate XDP attachment failure when HDS is low
~] ethtool -G eth0 hds-thresh 512
~] sudo ip link set eth0 xdpdrv obj xdp_pass_12345.o sec xdp
~] Error: fbnic: MTU too high, or HDS threshold is too low for single
buffer XDP.
- Validate successful XDP attachment when HDS threshold is appropriate
~] ethtool -G eth0 hds-thresh 1536
~] sudo ip link set eth0 xdpdrv obj xdp_pass_12345.o sec xdp
- Validate when the XDP program is attached, changing HDS thresh to a
lower value fails
~] ethtool -G eth0 hds-thresh 512
~] netlink error: fbnic: Use higher HDS threshold or multi-buf capable
program
- Validate HDS thresh does not matter when xdp frags support is
available
~] ethtool -G eth0 hds-thresh 512
~] sudo ip link set eth0 xdpdrv obj xdp_pass_mb_12345.o sec xdp.frags
Mohsin Bashir [Wed, 13 Aug 2025 22:13:13 +0000 (15:13 -0700)]
eth: fbnic: Use shinfo to track frags state on Rx
Remove local fields that track frags state and instead store this
information directly in the shinfo struct. This change is necessary
because the current implementation can lead to inaccuracies in certain
scenarios, such as when using XDP multi-buff support. Specifically, the
XDP program may update nr_frags without updating the local variables,
resulting in an inconsistent state.
Mohsin Bashir [Wed, 13 Aug 2025 22:13:12 +0000 (15:13 -0700)]
eth: fbnic: Update Headroom
Fbnic currently reserves a minimum of 64B headroom, but this is
insufficient for inserting additional headers (e.g., IPv6) via XDP, as
only 24 bytes are available for adjustment. To address this limitation,
increase the headroom to a larger value while ensuring better page use.
Although the resulting headroom (192B) is smaller than the recommended
value (256B), forcing the headroom to 256B would require aligning to
256B (as opposed to the current 128B), which can push the max headroom
to 511B.
Mohsin Bashir [Wed, 13 Aug 2025 22:13:11 +0000 (15:13 -0700)]
eth: fbnic: Add support for HDS configuration
Add support for configuring the header data split threshold.
For fbnic, the tcp data split support is enabled all the time.
Fbnic supports a maximum buffer size of 4KB. However, the reservation
for the headroom, tailroom, and padding reduce the max header size
accordingly.
ethtool -g eth0
Ring parameters for eth0:
Pre-set maximums:
...
HDS thresh: 3584
Current hardware settings:
...
HDS thresh: 1536
Verify hds tests in ksft-net-drv are passing
ksft-net-drv]# ./drivers/net/hds.py
TAP version 13
1..13
ok 1 hds.get_hds
ok 2 hds.get_hds_thresh
ok 3 hds.set_hds_disable # SKIP disabling of HDS not supported by ...
...
...
ok 12 hds.ioctl_set_xdp
ok 13 hds.ioctl_enabled_set_xdp
# Totals: pass:12 fail:0 xfail:0 xpass:0 skip:1 error:0
Jakub Kicinski [Tue, 19 Aug 2025 01:10:14 +0000 (18:10 -0700)]
Merge branch 'net-stmmac-eee-and-wol-cleanups'
Russell King says:
====================
net: stmmac: EEE and WoL cleanups
This series contains a series of cleanup patches for the EEE and WoL
code in stmmac, prompted by issues raised during the last three weeks.
====================
Russell King (Oracle) [Fri, 15 Aug 2025 11:32:21 +0000 (12:32 +0100)]
net: stmmac: explain the phylink_speed_down() call in stmmac_release()
The call to phylink_speed_down() looks odd on the face of it. Add a
comment to explain why this call is there. phylink_speed_up() is
always called in __stmmac_open(), and already has a comment.
Russell King (Oracle) [Fri, 15 Aug 2025 11:32:10 +0000 (12:32 +0100)]
net: stmmac: use core wake IRQ support
The PM core provides management of wake IRQs along side setting the
device wake enable state. In order to use this, we need to register
the interrupt used to wakeup the system using devm_pm_set_wake_irq()
or dev_pm_set_wake_irq(). The core will then enable or disable IRQ
wake state on this interrupt as appropriate, depending on the
device_set_wakeup_enable() state. device_set_wakeup_enable() does not
care about having balanced enable/disable calls.
Make use of this functionality, rather than explicitly managing the
IRQ enable state in the set_wol() ethtool op. This removes the IRQ
wake state management from stmmac.
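Wiring this up is a couple of calls at probe time (a sketch; the WoL
interrupt variable name is illustrative):

	/* at probe: hand the WoL interrupt to the PM core */
	device_set_wakeup_capable(priv->device, true);
	ret = devm_pm_set_wake_irq(priv->device, wol_irq);
	if (ret)
		return ret;

	/* in set_wol(): the core now toggles the IRQ wake state for us */
	device_set_wakeup_enable(priv->device, wol_enabled);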
Printing "stmmac: wakeup enable" to the kernel log isn't useful - it
doesn't identify the adapter, and is effectively nothing more than a
debugging print. This information can be discovered by looking at
/sys/device.../power/wakeup as the device_set_wakeup_enable() call
updates this sysfs file.
Russell King (Oracle) [Fri, 15 Aug 2025 11:32:00 +0000 (12:32 +0100)]
net: stmmac: remove redundant WoL option validation
The core ethtool API validates the WoL options passed from userspace
against the support which the driver reports from its get_wol() method,
returning EINVAL if an unsupported mode is requested.
Therefore, there is no need for stmmac to implement its own validation.
Remove this unnecessary code.
See ethnl_set_wol() in net/ethtool/wol.c and ethtool_set_wol() in
net/ethtool/ioctl.c.