Alex Elder [Fri, 19 Apr 2024 15:18:00 +0000 (10:18 -0500)]
net: ipa: kill ipa_version_supported()
The only place ipa_version_supported() is called is in the probe
function. The version comes from the match data. Rather than
checking the version validity separately, just consider anything
that has match data to be supported.
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Alex Elder [Fri, 19 Apr 2024 15:17:59 +0000 (10:17 -0500)]
net: ipa: fix two minor ipa_cmd problems
In "ipa_cmd.h", ipa_cmd_data_valid() is declared, but that function
does not exist. So delete that declaration.
Also, for some reason ipa_cmd_init() never gets called. It isn't
really critical; it just validates that some memory offsets and a
size can be represented in some register fields, and those checks
won't fail with current data. Regardless, call the function in ipa_probe().
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
The FILT_ROUT_HASH_EN register is only used for IPA v4.2. There,
routing and filter table hashing are not supported, and so the
register must be written to disable the feature. No other version
uses this register, so its definition can be removed. If we need to
use this register some day (for example, to explicitly enable the feature), this
commit can be reverted.
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Alex Elder [Fri, 19 Apr 2024 15:17:55 +0000 (10:17 -0500)]
net: ipa: call device_init_wakeup() earlier
Currently, enabling wakeup for the IPA device doesn't occur until
the setup phase of initialization (in ipa_power_setup()).
There is no need to delay doing that, however. We can conveniently
do it during the config phase, in ipa_interrupt_config(), where we
enable power management wakeup mode for the IPA interrupt.
Moving the device_init_wakeup() out of ipa_power_setup() leaves that
function empty, so it can just be eliminated.
Similarly, rearrange all of the matching inverse calls, disabling
device wakeup in ipa_interrupt_deconfig() and removing that function
as well.
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Alex Elder [Fri, 19 Apr 2024 15:17:53 +0000 (10:17 -0500)]
net: ipa: maintain bitmap of suspend-enabled endpoints
Keep track of which endpoints have the SUSPEND IPA interrupt enabled
in a variable-length bitmap. This will be used in the next patch to
allow the SUSPEND interrupt type to be disabled except when needed.
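A minimal sketch of the idea (the struct and field names here are
assumptions for illustration, not the driver's actual identifiers):

    /* allocate one bit per endpoint */
    interrupt->suspend_enabled = bitmap_zalloc(ipa->endpoint_count, GFP_KERNEL);

    /* record SUSPEND enable/disable for a given endpoint */
    set_bit(endpoint_id, interrupt->suspend_enabled);
    clear_bit(endpoint_id, interrupt->suspend_enabled);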
Signed-off-by: Alex Elder <elder@linaro.org>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
This series was born out of the discussions around Yanteng's recent
series adding support for the Loongson LS7A1000, LS2K1000, LS7A2000 and LS2K2000
MACs: Link: https://lore.kernel.org/netdev/fu3f6uoakylnb6eijllakeu5i4okcyqq7sfafhp5efaocbsrwe@w74xe7gb6x7p
In particular, Yanteng's patchset needed to implement the Loongson
MAC-specific constraints applied to the link speed and link duplex mode.
As a result of the discussion with Russell, the following preliminary patch was
born: Link: https://lore.kernel.org/netdev/df31e8bcf74b3b4ddb7ddf5a1c371390f16a2ad5.1712917541.git.siyanteng@loongson.cn
The patch above was a temporary solution used by Yanteng for further
development and to keep the ongoing review moving. This patchset is a
refactored version of that single patch, with the formatting required for the
fixes patches.
The main part of the series was already merged at the v1 stage. What is
left is the cleanup patches, which rename the
stmmac_ops::phylink_get_caps() callback to stmmac_ops::update_caps() and
move the MAC-capabilities init/re-init to the phylink MAC-capabilities
getter.
net: stmmac: Move MAC caps init to phylink MAC caps getter
After a set of recent fixes, the stmmac_phy_setup() and
stmmac_reinit_queues() methods have ended up with some duplicated code.
Let's get rid of the duplication by moving the MAC-capabilities
initialization to the PHYLINK MAC-capabilities getter. The getter is
called during each network device interface open/close cycle, so the
MAC capabilities will be initialized both in the generic device open
procedure and when the Tx/Rx queues are re-initialized, as the original
code semantics imply.
net: stmmac: Rename phylink_get_caps() callback to update_caps()
Since recent commits, the stmmac_ops::phylink_get_caps() callback is no
longer responsible for getting the phylink MAC capabilities; it merely
updates the MAC capabilities in the mac_device_info::link::caps field.
Rename the callback to match what the method does now.
====================
Enable RX HW timestamp for PTP packets using CPTS FIFO
The CPSW offers two mechanisms for communicating packet ingress timestamp
information to the host.
The first mechanism is via the CPTS Event FIFO, which records a timestamp
when triggered by certain events. One such event is the reception of an
Ethernet packet with a specified EtherType field. This is used to capture
ingress timestamps for PTP packets. With this mechanism, the host must
read the timestamp (from the CPTS FIFO) separately from the packet payload,
which is delivered via DMA.
In the second mechanism, the CPSW driver enables hardware timestamping
for all received packets by setting the TSTAMP_EN bit in the
CPTS_CONTROL register, which directs the CPTS module to timestamp every
received packet and pass the timestamp via DMA descriptors.
This mechanism triggers erratum i2401:
"CPSW: Host Timestamps Cause CPSW Port to Lock up."
The erratum affects all K3 SoCs. Link to the errata document for AM64x:
https://www.ti.com/lit/er/sprz457h/sprz457h.pdf
As a workaround, we can use the first mechanism to timestamp received
packets.
====================
net: ethernet: ti: am65-cpsw/ethtool: Enable RX HW timestamp only for PTP packets
In the current timestamping mechanism, the am65-cpsw-nuss driver
enables hardware timestamping for all received packets by setting
the TSTAMP_EN bit in the CPTS_CONTROL register, which directs the CPTS
module to timestamp every received packet and pass the timestamp via
DMA descriptors. This mechanism causes the CPSW port to lock up.
To prevent the port lockup, do not enable RX packet timestamping by
setting the TSTAMP_EN bit in the CPTS_CONTROL register. The workaround for
timestamping received packets is to use the CPTS Event FIFO,
which records timestamps corresponding to certain events. The CPTS
module is configured to generate timestamps for multicast Ethernet,
UDP/IPv4 and UDP/IPv6 PTP packets.
Update the supported hwtstamp_rx_filters values to reflect CPSW's
timestamping capability.
====================
Read PHY address of switch from device tree on MT7530 DSA subdriver
This patch series makes the driver read the PHY address the switch listens
on from the device tree, which in turn brings support for MT7530
switches listening on a PHY address other than 31. The patch series also
simplifies the core operations.
The core_rmw() function calls core_read_mmd_indirect() to read the
requested register, and then calls core_write_mmd_indirect() to write the
requested value to the register. Because Clause 22 is used to access Clause
45 registers, some operations in core_write_mmd_indirect() are run
unnecessarily. Get rid of core_read_mmd_indirect() and
core_write_mmd_indirect(), and run only the necessary operations in
core_write() and core_rmw().
Reviewed-by: Daniel Golle <daniel@makrotopia.org>
Tested-by: Daniel Golle <daniel@makrotopia.org>
Signed-off-by: Arınç ÜNAL <arinc.unal@arinc9.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
net: dsa: mt7530-mdio: read PHY address of switch from device tree
Read the PHY address the switch listens on from the reg property of the
switch node in the device tree. This change brings support for MT7530
switches on boards with a bootstrapping configuration where the switch
listens on a PHY address other than the address hardcoded in the
driver, 31.
As described in the "MT7621 Programming Guide v0.4" document, the MT7530
switch and its PHYs can be configured to listen on one of the PHY address
ranges 7-12, 15-20, 23-28, or 31 and 0-4.
There are operations where the switch PHY registers are used. For the PHY
address of the control PHY, transform the MT753X_CTRL_PHY_ADDR constant
into a macro and use it. The PHY address for the control PHY is 0 when the
switch listens on 31. In any other case, it is one greater than the PHY
address the switch listens on.
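That relationship can be expressed as a macro along these lines
(illustrative only; the driver's actual definition may differ):

    /* 31 + 1 wraps around to 0; any other switch address maps to address + 1 */
    #define MT753X_CTRL_PHY_ADDR(addr)	(((addr) + 1) & 0x1f)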
Reviewed-by: Daniel Golle <daniel@makrotopia.org>
Tested-by: Daniel Golle <daniel@makrotopia.org>
Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com>
Signed-off-by: Arınç ÜNAL <arinc.unal@arinc9.com>
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
====================
netlink: Add nftables spec w/ multi messages
This series adds a ynl spec for nftables and extends ynl with a --multi
command line option that makes it possible to send transactional batches
for nftables.
This series includes a patch for nfnetlink which adds ACK processing for
batch begin/end messages. If you'd prefer that to be sent separately to
nf-next then I can do so, but I included it here so that it gets seen in
context.
- ynl reports errors by raising an NlError exception so only the first
error gets reported. This could be changed to add errors to the list
of responses so that multiple errors could be reported.
- If any message does not get a response (e.g. batch-begin w/o patch 2)
then ynl waits indefinitely. A recv timeout could be added which
would allow ynl to terminate.
====================
Donald Hunter [Thu, 18 Apr 2024 10:47:37 +0000 (11:47 +0100)]
netfilter: nfnetlink: Handle ACK flags for batch messages
The NLM_F_ACK flag is ignored for nfnetlink batch begin and end
messages. This is a problem for ynl which wants to receive an ack for
every message it sends, not just the commands in between the begin/end
messages.
Add processing for ACKs for begin/end messages and provide responses
when requested.
I have checked that iproute2, pyroute2 and systemd are unaffected by
this change since none of them use NLM_F_ACK for batch begin/end.
Donald Hunter [Thu, 18 Apr 2024 10:47:36 +0000 (11:47 +0100)]
tools/net/ynl: Add multi message support to ynl
Add a "--multi <do-op> <json>" command line to ynl that makes it
possible to add several operations to a single netlink request payload.
The --multi command line option is repeated for each operation.
This is used by the nftables family for transaction batches. For
example:
Donald Hunter [Thu, 18 Apr 2024 10:47:35 +0000 (11:47 +0100)]
tools/net/ynl: Fix extack decoding for directional ops
NetlinkProtocol.decode() was looking up ops by response value which breaks
when it is used for extack decoding of directional ops. Instead, pass
the op to decode().
To have per-request buffer notifications, each zerocopy io_uring send
request allocates a new ubuf_info. However, as an skb can carry only
one uarg, this may force the stack to create many small skbs, hurting
performance in many ways.
The patchset implements notification (i.e. io_uring ubuf_info extension)
stacking. It attempts to link ubuf_infos into a list,
allowing an skb to carry multiple of them.
liburing/examples/send-zerocopy shows up to a 6x performance improvement
for TCP with 4KB per send, levelling it with MSG_ZEROCOPY. Without
the patchset, much larger sends are required to realise the full potential.
Pavel Begunkov [Fri, 19 Apr 2024 11:08:40 +0000 (12:08 +0100)]
net: add callback for setting a ubuf_info to skb
At the moment an skb can only have one ubuf_info associated with it,
which might be a performance problem for zerocopy sends in cases like
TCP via io_uring. Add a callback for assigning a ubuf_info to an skb;
this way we can later implement smarter assignment, like linking
ubuf_infos together.
Note that it's an optional callback, which should be compatible with
skb_zcopy_set(); that's because the net stack might potentially decide
to clone an skb and take another reference to the ubuf_info whenever it
wishes. Also, a correct implementation should always be able to bind to
an skb without a prior ubuf_info, otherwise we could end up in a situation
where the send would not be able to progress.
Pavel Begunkov [Fri, 19 Apr 2024 11:08:39 +0000 (12:08 +0100)]
net: extend ubuf_info callback to ops structure
We'll need to associate additional callbacks with ubuf_info, so introduce
a structure holding the ubuf_info callbacks. Apart from the smarter
io_uring notification management introduced in the next patches, it can be
used to generalise msg_zerocopy_put_abort() and also to store
->sg_from_iter, which is currently passed in struct msghdr.
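A rough sketch of the kind of ops table described here (member names and
signatures are assumptions, not necessarily the final API):

    struct ubuf_info_ops {
    	/* completion callback, previously embedded in ubuf_info itself */
    	void (*complete)(struct sk_buff *skb, struct ubuf_info *uarg,
    			 bool success);
    	/* optional: attach/link this uarg to an skb (added in a later patch) */
    	int (*link_skb)(struct sk_buff *skb, struct ubuf_info *uarg);
    };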
====================
tcp: avoid sending too small packets
tcp_sendmsg() cooks 'large' skbs, which are later split
if needed by tcp_write_xmit().
After a split, the leftover skb size is smaller than the optimal
size, and this causes a performance drop.
In this series, a tcp_grow_skb() helper is added to shift
payload from the second skb in the write queue to the first
skb, so that optimally sized skbs are always sent.
This increases TSO efficiency and decreases the number of ACK
packets.
====================
The tcp_tso_should_defer() heuristic is defeated because, after a split of
a packet in the write queue for whatever reason (this might be a too-small
CWND or a small enough pacing_rate),
the leftover packet in the queue is smaller than the optimal size.
It is time to try to make 'leftover packets' bigger so that
tcp_tso_should_defer() can realise its full potential.
After this patch, we can see the following output:
Eric Dumazet [Thu, 18 Apr 2024 21:45:59 +0000 (21:45 +0000)]
tcp: call tcp_set_skb_tso_segs() from tcp_write_xmit()
tcp_write_xmit() calls tcp_init_tso_segs()
to set gso_size and gso_segs on the packet.
tcp_init_tso_segs() requires the stack to maintain
an up-to-date tcp_skb_pcount(), and this makes sense
for packets in the rtx queue. Not so much for packets
still in the write queue.
In the following patch, we don't want to deal with
tcp_skb_pcount() when moving payload from the 2nd
skb to the 1st skb in the write queue.
Jakub Kicinski [Mon, 22 Apr 2024 21:22:22 +0000 (14:22 -0700)]
Merge branch 'mlx5e-per-queue-coalescing'
Tariq Toukan says:
====================
mlx5e per-queue coalescing
This patchset adds ethtool per-queue coalescing support for the mlx5e
driver.
The series introduces some changes needed as preparation for the final
patch, which adds the support and implements the callbacks. Main
changes:
- DIM code movements into its own header file.
- Switch to dynamic allocation of the DIM struct in the RQs/SQs.
- Allow coalescing config change without channels reset when possible.
====================
net/mlx5e: Support updating coalescing configuration without resetting channels
When CQE mode or DIM state is changed, gracefully reconfigure channels to
handle the new configuration. Previously, the driver would create new
channels reflecting the changes rather than update the original channels.
Co-developed-by: Nabil S. Alramli <dev@nalramli.com>
Signed-off-by: Nabil S. Alramli <dev@nalramli.com>
Co-developed-by: Joe Damato <jdamato@fastly.com>
Signed-off-by: Joe Damato <jdamato@fastly.com>
Signed-off-by: Rahul Rameshbabu <rrameshbabu@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://lore.kernel.org/r/20240419080445.417574-5-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
net/mlx5e: Dynamically allocate DIM structure for SQs/RQs
Make it possible for the DIM structure to be torn down while an SQ or RQ is
still active. Changing the CQ period mode is an example where the previous
sampling done with the DIM structure would need to be invalidated.
Co-developed-by: Nabil S. Alramli <dev@nalramli.com>
Signed-off-by: Nabil S. Alramli <dev@nalramli.com>
Co-developed-by: Joe Damato <jdamato@fastly.com>
Signed-off-by: Joe Damato <jdamato@fastly.com>
Signed-off-by: Rahul Rameshbabu <rrameshbabu@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://lore.kernel.org/r/20240419080445.417574-4-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
net/mlx5e: Use DIM constants for CQ period mode parameter
Use core DIM CQ period mode enum values for the CQ parameter for the period
mode. Translate the value to the specific mlx5 device constant for the
selected period mode when creating a CQ. Avoid needing to translate mlx5
device constants to DIM constants for core DIM functionality.
Co-developed-by: Nabil S. Alramli <dev@nalramli.com>
Signed-off-by: Nabil S. Alramli <dev@nalramli.com>
Co-developed-by: Joe Damato <jdamato@fastly.com>
Signed-off-by: Joe Damato <jdamato@fastly.com>
Signed-off-by: Rahul Rameshbabu <rrameshbabu@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://lore.kernel.org/r/20240419080445.417574-3-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
net/mlx5e: Move DIM function declarations to en/dim.h
Create a header specifically for DIM-related declarations. Move existing
DIM-specific functionality from en.h. Future DIM-related functionality will
be declared in en/dim.h in subsequent patches.
Co-developed-by: Nabil S. Alramli <dev@nalramli.com>
Signed-off-by: Nabil S. Alramli <dev@nalramli.com>
Co-developed-by: Joe Damato <jdamato@fastly.com>
Signed-off-by: Joe Damato <jdamato@fastly.com>
Signed-off-by: Rahul Rameshbabu <rrameshbabu@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://lore.kernel.org/r/20240419080445.417574-2-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
====================
net: dsa: vsc73xx: convert to PHYLINK and do some cleanup
This patch series is a result of splitting a larger patch series [0],
where some parts needed to be refactored.
The first patch switches from a poll loop to read_poll_timeout.
The second patch is a simple conversion to phylink because adjust_link
won't work anymore.
The third patch is preparation for future use. Using the
"phy_interface_mode_is_rgmii" macro allows for the proper recognition
of all RGMII modes.
Patches 4-5 involve some cleanup: The fourth patch introduces
a definition with the maximum number of ports to avoid using
magic numbers. The next one fills in documentation.
net: dsa: vsc73xx: use macros for rgmii recognition
It's preparation for future use. At this moment, the RGMII port is used
only for a connection to the MAC interface, but in the future, someone
could connect a PHY to it. Using the "phy_interface_mode_is_rgmii" macro
allows for the proper recognition of all RGMII modes.
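For illustration, the kind of check this enables (sketch only, not the
driver's exact code):

    /* accept RGMII, RGMII_ID, RGMII_RXID and RGMII_TXID alike */
    if (phy_interface_mode_is_rgmii(state->interface)) {
    	/* configure the port for RGMII */
    }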
Suggested-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Pawel Dembicki <paweldembicki@gmail.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com>
Reviewed-by: Vladimir Oltean <olteanv@gmail.com>
Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Link: https://lore.kernel.org/r/20240417205048.3542839-4-paweldembicki@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
net: dsa: vsc73xx: use read_poll_timeout instead delay loop
Switch the delay loop during the Arbiter empty check from
vsc73xx_adjust_link() to use read_poll_timeout(). Functionally,
one msleep() call is eliminated at the end of the loop in the timeout
case.
As Russell King suggested:
"This [change] avoids the issue that on the last iteration, the code reads
the register, tests it, finds the condition that's being waiting for is
false, _then_ waits and end up printing the error message - that last
wait is rather useless, and as the arbiter state isn't checked after
waiting, it could be that we had success during the last wait."
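For reference, the general shape of such a conversion (the helper and
state constant below are assumptions for illustration, not the driver's
exact code):

    /* poll the (assumed) arbiter-state helper every 1 ms, up to 10 ms */
    err = read_poll_timeout(vsc73xx_arbiter_state_get, val,
    			    val == ARBITER_EMPTY, 1000, 10000, false, vsc);
    if (err)
    	dev_err(vsc->dev, "timed out waiting for arbiter to empty\n");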
Suggested-by: Russell King <linux@armlinux.org.uk>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Reviewed-by: Florian Fainelli <florian.fainelli@broadcom.com>
Signed-off-by: Pawel Dembicki <paweldembicki@gmail.com>
Link: https://lore.kernel.org/r/20240417205048.3542839-2-paweldembicki@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Eric Dumazet [Fri, 19 Apr 2024 07:19:42 +0000 (07:19 +0000)]
tcp: do not export tcp_twsk_purge()
After commit 1eeb50435739 ("tcp/dccp: do not care about
families in inet_twsk_purge()") tcp_twsk_purge() is
no longer potentially called from a module.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Kuniyuki Iwashima <kuniyu@amazon.com>
Reviewed-by: Kuniyuki Iwashima <kuniyu@amazon.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
octeontx2-pf: Add support for offload tc with skbedit mark action
Support offloading of skbedit mark action.
For example, to mark with 0x0008, with dest ip 60.60.60.2 on eth2
interface:
# tc qdisc add dev eth2 ingress
# tc filter add dev eth2 ingress protocol ip flower \
dst_ip 60.60.60.2 action skbedit mark 0x0008 skip_sw
Signed-off-by: Geetha sowjanya <gakula@marvell.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
net: ethernet: ti: am65-cpsw: Fix xdp_rxq error for disabled port
When an ethX port is disabled in the device tree, an error is returned
by the xdp_rxq_info_reg() function while transitioning the CPSW device to
the up state. The message 'Missing net_device from driver' is output.
This patch fixes the issue by registering xdp_rxq info only if the ethX
port is enabled (i.e. the ndev pointer is not NULL).
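A sketch of the described change (variable names are assumptions):

    /* register xdp_rxq info only for ports that have a net_device */
    if (port->ndev) {
    	ret = xdp_rxq_info_reg(&rx_chn->xdp_rxq, port->ndev, id, 0);
    	if (ret)
    		return ret;
    }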
Fixes: 8acacc40f733 ("net: ethernet: ti: am65-cpsw: Add minimal XDP support")
Link: https://lore.kernel.org/all/260d258f-87a1-4aac-8883-aab4746b32d8@ti.com/
Reported-by: Siddharth Vadapalli <s-vadapalli@ti.com>
Closes: https://gist.github.com/Siddharth-Vadapalli-at-TI/5ed0e436606001c247a7da664f75edee
Signed-off-by: Julien Panis <jpanis@baylibre.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
To be able to constify instances of struct ctl_table it is necessary to
remove the ways through which non-const versions are exposed from the
sysctl core.
One of these is the ctl_table_arg member of struct ctl_table_header.
Constify this reference as a prerequisite for the full constification of
struct ctl_table instances.
No functional change.
Signed-off-by: Thomas Weißschuh <linux@weissschuh.net>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
====================
testing: make netfilter selftests functional in vng environment
This is the second batch of the netfilter selftest move.
Changes since v1:
- makefile and kernel config are updated to have all required features
- fix makefile with missing bits to make kselftest-install work
- test it via vng as per
https://github.com/linux-netdev/nipa/wiki/How-to-run-netdev-selftests-CI-style
(Thanks Jakub!)
- squash a few fixes, e.g. nft_queue.sh v1 had a race w. NFNETLINK_QUEUE=m
- add a settings file with 8m timeout, for nft_concat_range.sh sake.
That script can be sped up a bit, I think, but it's not contained in
this batch yet.
- toss the first two bogus rebase artifacts (Matthieu Baerts)
The scripts are moved to the lib.sh infra. This allows use of the busywait
helper and ditches the various 'sleep 2' calls all over the place.
selftests: netfilter: update makefiles and kernel config
Jakub reports that the Makefile missed a few updates needed to make
kselftest-install work for the netfilter tests, and points out that the
config file lacks many dependencies such as VETH support.
The settings file (timeout 8m) is added for the nft_concat_range.sh script,
which can take several minutes to complete.
Fixes: 3f189349e52a ("selftests: netfilter: move to net subdir")
Reported-by: Jakub Kicinski <kuba@kernel.org>
Closes: https://lore.kernel.org/all/20240412175413.04e5e616@kernel.org/
Signed-off-by: Florian Westphal <fw@strlen.de>
Link: https://lore.kernel.org/r/20240418152744.15105-13-fw@strlen.de
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
selftests: netfilter: xt_string.sh: move to lib.sh infra
Intentional changes:
- Use socat instead of netcat
- Use a temporary file instead of a pipe; otherwise packets do not match
the "-m string" rules: multiple writes to the pipe cause multiple packets,
but only one is needed for the match to work.
selftests: netfilter: nft_zones_many.sh: move to lib.sh infra
Also do shellcheck cleanups here, no functional changes intended.
When running tests via vng tool, the packetpath insertion test fails:
dd: failed to open '/dev/stdout': Device or resource busy
selftests: netfilter: nft_queue.sh: move to lib.sh infra
- switch to socat, like other tests
- use the busywait helper to wait until the listener netns is ready
- do not generate multiple input test files, only generate
one and use cleanup hook to remove it, like other temporary files.
Russell King (Oracle) [Thu, 18 Apr 2024 10:51:21 +0000 (11:51 +0100)]
net: dsa: xrs700x: fix missing initialisation of ds->phylink_mac_ops
The kernel build bot identified the following mistake in the recently
merged 860a9bed2651 ("net: dsa: xrs700x: provide own phylink MAC
operations") patch:
drivers/net/dsa/xrs700x/xrs700x.c:714:37: warning: 'xrs700x_phylink_mac_ops' defined but not used [-Wunused-const-variable=]
714 | static const struct phylink_mac_ops xrs700x_phylink_mac_ops = {
| ^~~~~~~~~~~~~~~~~~~~~~~
Fix the omitted assignment of ds->phylink_mac_ops.
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 19 Apr 2024 10:38:04 +0000 (11:38 +0100)]
Merge branch 'net-rps-lockless'
Jason Xing says:
====================
locklessly protect left members in struct rps_dev_flow
From: Jason Xing <kernelxing@tencent.com>
Since Eric already did a more complicated lockless change to the last_qtail
member [1] in struct rps_dev_flow, the remaining members are easier to change
in the same way.
One important thing I would like to share, quoting Eric:
"rflow is located in rxqueue->rps_flow_table, it is thus private to current
thread. Only one cpu can service an RX queue at a time."
So we only pay attention to the reader in rps_may_expire_flow() and
the writer in set_rps_cpu(). They are in two different contexts.
Jason Xing [Thu, 18 Apr 2024 07:36:03 +0000 (15:36 +0800)]
net: rps: locklessly access rflow->cpu
This is the last member in struct rps_dev_flow which needs lockless
protection, so finish the job.
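The pattern applied here is the usual lockless annotation pairing,
roughly (context simplified for illustration):

    /* writer side, e.g. in set_rps_cpu() */
    WRITE_ONCE(rflow->cpu, next_cpu);

    /* reader side, e.g. in rps_may_expire_flow() */
    unsigned int flow_cpu = READ_ONCE(rflow->cpu);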
Signed-off-by: Jason Xing <kernelxing@tencent.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jason Xing [Thu, 18 Apr 2024 07:36:02 +0000 (15:36 +0800)]
net: rps: protect filter locklessly
As we can see, rflow->filter can be written/read concurrently, so
lockless access is needed.
Signed-off-by: Jason Xing <kernelxing@tencent.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Jason Xing [Thu, 18 Apr 2024 07:36:01 +0000 (15:36 +0800)]
net: rps: protect last_qtail with rps_input_queue_tail_save() helper
Remove one unnecessary reader protection and add another writer
protection to finish the lockless protection job.
Note: the removed READ_ONCE() is not needed because we only have to protect
the lockless reader in the other context (rps_may_expire_flow()).
Signed-off-by: Jason Xing <kernelxing@tencent.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Fri, 19 Apr 2024 10:34:08 +0000 (11:34 +0100)]
Merge branch 'net_sched-dump-no-rtnl'
Eric Dumazet says:
====================
net_sched: first series for RTNL-less qdisc dumps
The medium-term goal is to implement "tc qdisc show" without needing
to acquire RTNL.
This first series makes the requested changes in 14 qdiscs.
Notes:
- RTNL is still held in "tc qdisc show"; more changes are needed.
- Qdiscs returning many attributes might want/need to provide
a consistent set of attributes. If that is the case, their
dump() method could acquire the qdisc spinlock, to pair with the
spinlock acquisition in their change() method.
V2: Addressed Simon feedback (Thanks a lot Simon)
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Eric Dumazet [Thu, 18 Apr 2024 07:32:36 +0000 (07:32 +0000)]
net_sched: cake: implement lockless cake_dump()
Instead of relying on RTNL, cake_dump() can use READ_ONCE()
annotations, paired with WRITE_ONCE() ones in cake_change().
v2: addressed Simon feedback in V1: https://lore.kernel.org/netdev/20240417083549.GA3846178@kernel.org/
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Toke Høiland-Jørgensen <toke@toke.dk>
Reviewed-by: Simon Horman <horms@kernel.org>
Acked-by: Toke Høiland-Jørgensen <toke@toke.dk>
Signed-off-by: David S. Miller <davem@davemloft.net>
gve: Remove qpl_cfg struct since qpl_ids map with queues respectively
The qpl_cfg struct was used to make sure that no two different queues
are using a QPL with the same qpl_id. We can remove that qpl_cfg struct
since the qpl_ids now map to the queues as follows:
For tx queues: qpl_id = tx_qid
For rx queues: qpl_id = max_tx_queues + rx_qid
When XDP is used, the user needs to reduce the number of tx queues to
at most half of max_tx_queues. XDP will then use the same number
of tx queues, starting from the end of the existing tx queues. So the
XDP queues will not exceed the max_tx_queues range and will not overlap
with the rx queues, and the qpl_ids will not overlap either.
Considering that, we remove the qpl_cfg struct and derive the qpl_id
directly from the queue id. Unless we erroneously allocate an
rx/tx queue that has already been allocated, we will never allocate a
QPL with the same qpl_id twice; and in that case, the failure would happen
much earlier than the QPL assignment.
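Expressed as code, the mapping described above amounts to something like
this (the helper names are assumptions, not the driver's actual ones):

    static u32 gve_tx_qpl_id(u32 tx_qid)
    {
    	return tx_qid;
    }

    static u32 gve_rx_qpl_id(u32 max_tx_queues, u32 rx_qid)
    {
    	return max_tx_queues + rx_qid;
    }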
Jakub Kicinski [Fri, 19 Apr 2024 01:27:40 +0000 (18:27 -0700)]
Merge branch 'net: Add support for Power over Ethernet (PoE)'
Kory Maincent says:
====================
net: Add support for Power over Ethernet (PoE)
This patch series aims at adding support for PoE (Power over Ethernet),
based on the already existing support for PoDL (Power over Data Line)
implementation. In addition, it adds support for two specific PoE
controller, the Microchip PD692x0 and the TI TPS23881.
====================
net: pse-pd: Use regulator framework within PSE framework
Integrate the regulator framework to the PSE framework for enhanced
access to features such as voltage, power measurement, and limits, which
are akin to regulators. Additionally, PSE features like port priorities
could potentially enhance the regulator framework. Note that this
integration introduces some implementation complexity, including wrapper
callbacks, but the potential benefits make it worthwhile.
Regulators use an enable counter with specific behavior:
two calls to regulator_disable() will trigger kernel warnings, and
if the counter exceeds one, a regulator_disable() call won't disable the
PSE PI. This behavior isn't suitable for PSE control.
Add a boolean 'enabled' state to prevent multiple calls to
regulator_enable()/regulator_disable(). These calls will only be made from
the PSE framework; as it won't have any regulator children, no mutex is
needed to safeguard this boolean.
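A minimal sketch of the guarded enable path (the struct and field names
are assumptions for illustration, not the framework's actual code):

    static int pse_pi_enable(struct pse_pi *pi)
    {
    	int ret;

    	if (pi->enabled)		/* already on: nothing to do */
    		return 0;

    	ret = regulator_enable(pi->supply);
    	if (ret)
    		return ret;

    	pi->enabled = true;
    	return 0;
    }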
regulator_get() needs the consumer device pointer. Use the PSE device as
both regulator provider and consumer until we have RJ45 ports represented
in the kernel.
net: pse-pd: Add support for setup_pi_matrix callback
Implement the setup_pi_matrix callback to configure the PSE PI matrix. This
functionality is invoked before registering the PSE and after the core
has parsed the pse_pis devicetree subnode.
dt-bindings: net: pse-pd: Add another way of describing several PSE PIs
PSE PI setup may encompass multiple PSE controllers or auxiliary circuits
that collectively manage power delivery to one Ethernet port.
Such configurations might support a range of PoE standards and require
the capability to dynamically configure power delivery based on the
operational mode (e.g., PoE2 versus PoE4) or specific requirements of
connected devices. In these instances, a dedicated PSE PI node becomes
essential for accurately documenting the system architecture. This node
would serve to detail the interactions between different PSE controllers,
the support for various PoE modes, and any additional logic required to
coordinate power delivery across the network infrastructure.
The old usage of "#pse-cells" is insufficient as it carries only the PSE PI
index information.
The Power Sourcing Equipment Power Interface (PSE PI) plays a pivotal role
in the architecture of Power over Ethernet (PoE) systems. It is essentially
a blueprint that outlines how one or multiple power sources are connected
to the eight-pin modular jack, commonly known as the Ethernet RJ45 port.
This connection scheme is crucial for enabling the delivery of power
alongside data over Ethernet cables.
This patch adds support for getting the PSE controller node through the
PSE PI device subnode. It adds a way to get the PSE PI id from the pse_pi
devicetree subnode of a PSE controller node simply by reading the reg
property.