Francis Pravin [Thu, 9 Jan 2025 23:51:37 +0000 (05:21 +0530)]
nvme-pci: use correct size to free the hmb buffer
dev->host_mem_size is updated only after the hmb descriptor buffer is
successfully allocated; before that, it may hold an undefined value.
So, use the correct size to free the hmb buffer when the hmb descriptor
buffer allocation fails.
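A minimal sketch of the pattern, with illustrative names rather than the
driver's actual identifiers: on the error path, free with the size that was
passed to the allocation, since dev->host_mem_size is only assigned on
success.
	buf = dma_alloc_coherent(dmadev, size, &dma_addr, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	descs = kcalloc(nr_descs, sizeof(*descs), GFP_KERNEL);
	if (!descs) {
		/* dev->host_mem_size is not set yet, so free using "size" */
		dma_free_coherent(dmadev, size, buf, dma_addr);
		return -ENOMEM;
	}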
Signed-off-by: Francis Pravin <francis.p@samsung.com> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Keith Busch <kbusch@kernel.org>
Keisuke Nishimura [Mon, 16 Dec 2024 15:27:20 +0000 (16:27 +0100)]
nvme: Add error path for xa_store in nvme_init_effects
The xa_store() may fail due to a memory allocation failure because there
is no guarantee that the index NVME_CSI_NVM is already in use. This fix
introduces a new function to handle the error path.
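A minimal sketch of the error-handling pattern with an illustrative helper
name (the actual function in the patch may differ):
	static int nvme_init_effects_log(struct nvme_ctrl *ctrl, u8 csi,
					 struct nvme_effects_log *log)
	{
		void *old;

		old = xa_store(&ctrl->cels, csi, log, GFP_KERNEL);
		if (xa_is_err(old)) {
			/* Nothing stored: free the log we were asked to keep. */
			kfree(log);
			return xa_err(old);
		}
		return 0;
	}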
Fixes: cc115cbe12d9 ("nvme: always initialize known command effects") Signed-off-by: Keisuke Nishimura <keisuke.nishimura@inria.fr> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Keith Busch <kbusch@kernel.org>
Damien Le Moal [Sat, 4 Jan 2025 04:59:51 +0000 (13:59 +0900)]
Documentation: Document the NVMe PCI endpoint target driver
Add a documentation file
(Documentation/nvme/nvme-pci-endpoint-target.rst) for the new NVMe PCI
endpoint target driver. This provides an overview of the driver
requirements, capabilities and limitations. A user guide describing how
to set up an NVMe PCI endpoint device using this driver is also provided.
This document is also made accessible from the PCI endpoint
documentation using a link. Furthermore, since the existing nvme
documentation was not accessible from the top documentation index, an
index file is added to Documentation/nvme and this index is listed as
"NVMe Subsystem" in the "Storage interfaces" section of the subsystem
API index.
Signed-off-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Acked-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Signed-off-by: Keith Busch <kbusch@kernel.org>
Damien Le Moal [Sat, 4 Jan 2025 04:59:50 +0000 (13:59 +0900)]
nvmet: New NVMe PCI endpoint function target driver
Implement a PCI target driver using the PCI endpoint framework. This
requires hardware with a PCI controller capable of executing in endpoint
mode.
The PCI endpoint framework is used to set up a PCI endpoint function
and its BAR compatible with an NVMe PCI controller. The framework is also
used to map local memory to the PCI address space to execute MMIO
accesses for retrieving NVMe commands from submission queues and posting
completion entries to completion queues. If supported, DMA is used for
command retrieval and command data transfers, based on the PCI address
segments indicated by the command using either PRPs or SGLs.
The NVMe target driver relies on the NVMe target core code to execute
all commands issued by the host. The PCI target driver is mainly
responsible for the following:
- Initialization and teardown of the endpoint device and its backend
PCI target controller. The PCI target controller is created using a
subsystem and a port defined through configfs. The port used must be
initialized with the "pci" transport type. The target controller is
allocated and initialized when the PCI endpoint is started by binding
it to the endpoint PCI device (nvmet_pci_epf_epc_init() function).
- Manage the endpoint controller state according to the PCI link state
and the actions of the host (e.g. checking the CC.EN register) and
propagate these actions to the PCI target controller. Polling of the
controller enable/disable is done using a delayed work scheduled
every 5ms (nvmet_pci_epf_poll_cc() function). This work is started
whenever the PCI link comes up (nvmet_pci_epf_link_up() notifier
function) and stopped when the PCI link comes down
(nvmet_pci_epf_link_down() notifier function).
nvmet_pci_epf_poll_cc() enables and disables the PCI controller using
the functions nvmet_pci_epf_enable_ctrl() and
nvmet_pci_epf_disable_ctrl(). The controller admin queue is created
using nvmet_pci_epf_create_cq(), which calls nvmet_cq_create(), and
nvmet_pci_epf_create_sq() which uses nvmet_sq_create().
nvmet_pci_epf_disable_ctrl() always resets the PCI controller to its
initial state so that nvmet_pci_epf_enable_ctrl() can be called
again. This ensures correct operation if, for instance, the host
reboots causing the PCI link to be temporarily down.
- Manage the controller admin and I/O submission queues using local
memory. Commands are obtained from submission queues using a work
item that constantly polls the doorbells of all submission queues
(nvmet_pci_epf_poll_sqs() function; see the sketch after this list).
This work is started whenever the controller is enabled
(nvmet_pci_epf_enable_ctrl() function) and stopped when the controller
is disabled (nvmet_pci_epf_disable_ctrl() function). When new commands
are submitted by the host, DMA transfers are used to retrieve the
commands.
- Initiate the execution of all admin and I/O commands using the target
core code, by calling the request's execute() function. All commands are
individually handled using a per-command work item
(nvmet_pci_epf_iod_work() function). The overall execution of a command
includes: initializing a struct nvmet_req request for the command,
using nvmet_req_transfer_len() to get the command data transfer length,
parsing the command PRPs or SGLs to get the PCI address segments of
the command data buffer, retrieving data from the host (if the command
is a write command), calling req->execute() to execute the command, and
transferring data to the host (for read commands).
- Handle the completions of commands as notified by the
->queue_response() operation of the PCI target controller
(nvmet_pci_epf_queue_response() function). Completed commands are
added to a list of completed commands for their CQ. Each CQ list of
completed commands is processed using a work item
(nvmet_pci_epf_cq_work() function) which posts entries for the
completed commands in the CQ memory and raises an IRQ to the host to
signal the completion. IRQ coalescing is supported as mandated by the
NVMe base specification for PCI controllers. Of note is that
completion entries are transmitted to the host using MMIO, after
mapping the completion queue memory to the host PCI address space.
Unlike for retrieving commands from SQs, DMA is not used as it
degrades performance due to the transfer serialization needed (which
delays completion entry transmission).
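As an illustration of the submission queue doorbell polling mentioned in the
list above, here is a minimal sketch of the work item pattern. All structure
and field names other than nvmet_pci_epf_poll_sqs() are illustrative, not the
driver's actual identifiers:
static void nvmet_pci_epf_poll_sqs(struct work_struct *work)
{
	struct nvmet_pci_epf_ctrl *ctrl =
		container_of(work, struct nvmet_pci_epf_ctrl, poll_sqs.work);
	int i;

	for (i = 0; i < ctrl->nr_sq; i++) {
		struct nvmet_pci_epf_queue *sq = &ctrl->sq[i];
		u32 tail = readl(sq->db);

		while (sq->head != tail) {
			/* DMA the SQE at sq->head into local memory and
			 * schedule its per-command work item. */
			nvmet_pci_epf_fetch_cmd(ctrl, sq);
			sq->head = (sq->head + 1) % sq->depth;
		}
	}

	/* Re-arm as long as the controller is enabled. */
	if (ctrl->enabled)
		schedule_delayed_work(&ctrl->poll_sqs, 1);
}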
The configuration of an NVMe PCI endpoint controller is done using
configfs. First, the NVMe PCI target controller configuration must be
done to set up a subsystem and a port with the "pci" addr_trtype
attribute. The subsystem can be set up using a file or block device
backed namespace or using a passthrough NVMe device. After this, the
PCI endpoint can be configured and bound to the PCI endpoint controller
to start the NVMe endpoint controller.
In order to not overcomplicate this initial implementation of an
endpoint PCI target controller driver, protection information is not
supported for now. If the PCI controller port and namespace are
configured with protection information support, an error will be
returned when the controller is created and initialized, that is, when
the endpoint function is started. Protection information support will be
added in a follow-up patch series.
Using a Rock5B board (Rockchip RK3588 SoC, PCI Gen3x4 endpoint
controller) with a target PCI controller set up with 4 I/O queues and a
null_blk block device as a namespace, the maximum performance measured
with fio was 131 KIOPS for random 4K reads and up to 2.8 GB/s
throughput.
The NVMe PCI endpoint target driver is not intended for production use.
It is a tool for learning NVMe, exploring existing features and testing
implementations of new NVMe features.
Co-developed-by: Rick Wertenbroek <rick.wertenbroek@gmail.com> Signed-off-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Reviewed-by: Krzysztof Wilczyński <kwilczynski@kernel.org> Signed-off-by: Keith Busch <kbusch@kernel.org>
Damien Le Moal [Sat, 4 Jan 2025 04:59:49 +0000 (13:59 +0900)]
nvmet: Implement arbitration feature support
NVMe base specification v2.1 mandates support for the arbitration
feature (NVME_FEAT_ARBITRATION). Introduce the data structure
struct nvmet_feat_arbitration to define the high, medium and low
priority weight fields and the arbitration burst field of this feature
and implement the functions nvmet_get_feat_arbitration() and
nvmet_set_feat_arbitration() to get and set these fields.
Since there is no generic way to implement support for the arbitration
feature, these functions respectively use the controller get_feature()
and set_feature() operations to process the feature with the help of
the controller driver. If the controller driver does not implement these
operations and a get feature command or a set feature command for this
feature is received, the command is failed with an invalid field error.
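The feature data and the get-side handling follow this rough shape (a sketch
based on the description; the field layout and details are illustrative):
struct nvmet_feat_arbitration {
	u8	hpw;	/* high priority weight */
	u8	mpw;	/* medium priority weight */
	u8	lpw;	/* low priority weight */
	u8	ab;	/* arbitration burst */
};

static u16 nvmet_get_feat_arbitration(struct nvmet_req *req)
{
	struct nvmet_ctrl *ctrl = req->sq->ctrl;
	struct nvmet_feat_arbitration arb = { };
	u16 status;

	/* No generic implementation: defer to the controller driver. */
	if (!ctrl->ops->get_feature)
		return NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;

	status = ctrl->ops->get_feature(ctrl, NVME_FEAT_ARBITRATION, &arb);
	if (status != NVME_SC_SUCCESS)
		return status;

	nvmet_set_result(req, ((u32)arb.hpw << 24) | ((u32)arb.mpw << 16) |
			      ((u32)arb.lpw << 8) | arb.ab);
	return NVME_SC_SUCCESS;
}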
Signed-off-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Tested-by: Rick Wertenbroek <rick.wertenbroek@gmail.com> Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Signed-off-by: Keith Busch <kbusch@kernel.org>
Damien Le Moal [Sat, 4 Jan 2025 04:59:48 +0000 (13:59 +0900)]
nvmet: Implement interrupt config feature support
The NVMe base specification v2.1 mandates supporting the interrupt
config feature (NVME_FEAT_IRQ_CONFIG) for PCI controllers. Introduce the
data structure struct nvmet_feat_irq_config to define the coalescing
disabled (cd) and interrupt vector (iv) fields of this feature and
implement the functions nvmet_get_feat_irq_config() and
nvmet_set_feat_irq_config() to get and set these fields. These
functions respectively use the controller get_feature() and
set_feature() operations to fill and handle the fields of struct
nvmet_feat_irq_config.
Support for this feature is prohibited for fabrics controllers. If a get
feature command or a set feature command for this feature is received
for a fabrics controller, the command is failed with an invalid field
error.
Signed-off-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Tested-by: Rick Wertenbroek <rick.wertenbroek@gmail.com> Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Signed-off-by: Keith Busch <kbusch@kernel.org>
Damien Le Moal [Sat, 4 Jan 2025 04:59:47 +0000 (13:59 +0900)]
nvmet: Implement interrupt coalescing feature support
The NVMe base specification v2.1 mandates supporting the interrupt
coalescing feature (NVME_FEAT_IRQ_COALESCE) for PCI controllers.
Introduce the data structure struct nvmet_feat_irq_coalesce to define
the time and threshold (thr) fields of this feature and implement the
functions nvmet_get_feat_irq_coalesce() and
nvmet_set_feat_irq_coalesce() to get and set this feature. These
functions respectively use the controller get_feature() and
set_feature() operations to fill and handle the fields of struct
nvmet_feat_irq_coalesce.
While the Linux kernel nvme driver does not use this feature and thus
will not complain if it is not implemented, other major OSes fail to
initialize the NVMe device if support for this feature is missing.
Support for this feature is prohibited for fabrics controllers. If a get
feature or set feature command for this feature is received for a
fabrics controller, the command is failed with an invalid field error.
Suggested-by: Rick Wertenbroek <rick.wertenbroek@gmail.com> Signed-off-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Tested-by: Rick Wertenbroek <rick.wertenbroek@gmail.com> Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Signed-off-by: Keith Busch <kbusch@kernel.org>
Damien Le Moal [Sat, 4 Jan 2025 04:59:46 +0000 (13:59 +0900)]
nvmet: Implement host identifier set feature support
The NVMe specifications mandate support for the host identifier
set_features for controllers that also support reservations. Satisfy
this requirement by implementing handling of the NVME_FEAT_HOST_ID
feature for the nvme_set_features command. This implementation is for
now effective only for PCI target controllers. For other controller
types, the set features command is failed with a NVME_SC_CMD_SEQ_ERROR
status as before.
As noted in the code, 128-bit host identifiers are supported since the
NVMe base specification version 2.1 indicates in section 5.1.25.1.28.1
that "The controller may support a 64-bit Host Identifier...".
The RHII (Reservations and Host Identifier Interaction) bit of the
controller attribute (ctratt) field of the identify controller data is
also set to indicate that a host ID of "0" is supported but that the
host ID must be a non-zero value to use reservations.
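A minimal sketch of the set-features handling using existing nvmet helpers
(illustrative; details may differ from the actual patch):
static u16 nvmet_set_feat_host_id(struct nvmet_req *req)
{
	struct nvmet_ctrl *ctrl = req->sq->ctrl;

	if (!nvmet_is_pci_ctrl(ctrl))
		return NVME_SC_CMD_SEQ_ERROR | NVME_STATUS_DNR;

	/* Only 128-bit (extended) host identifiers are supported. */
	if (!(le32_to_cpu(req->cmd->common.cdw11) & 0x1))
		return NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;

	return nvmet_copy_from_sgl(req, 0, &ctrl->hostid,
				   sizeof(ctrl->hostid));
}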
Signed-off-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Tested-by: Rick Wertenbroek <rick.wertenbroek@gmail.com> Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Signed-off-by: Keith Busch <kbusch@kernel.org>
The implementation of some features cannot always be done generically by
the target core code. Arbitration and IRQ coalescing features are
examples of such features: their implementation must be provided (at
least partially) by the target controller driver.
Introduce the set_feature() and get_feature() controller fabrics
operations (in struct nvmet_fabrics_ops) to allow supporting such
features.
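The new hooks are two optional per-feature operations added to struct
nvmet_fabrics_ops; a sketch of the prototypes (the exact signatures are an
assumption based on the description):
struct nvmet_fabrics_ops {
	...
	/* Optional: handle features that cannot be implemented
	 * generically by the target core code. */
	u16 (*get_feature)(const struct nvmet_ctrl *ctrl, u8 feat,
			   void *data);
	u16 (*set_feature)(const struct nvmet_ctrl *ctrl, u8 feat,
			   void *data);
};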
Signed-off-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Tested-by: Rick Wertenbroek <rick.wertenbroek@gmail.com> Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Signed-off-by: Keith Busch <kbusch@kernel.org>
Damien Le Moal [Sat, 4 Jan 2025 04:59:44 +0000 (13:59 +0900)]
nvmet: Do not require SGL for PCI target controller commands
Support for SGL is optional for the PCI transport. Modify
nvmet_req_init() to not require the NVME_CMD_SGL_METABUF command flag to
be set if the target controller transport type is NVMF_TRTYPE_PCI.
In addition to this, the NVMe base specification v2.1 mandates that all
admin commands use PRPs, that is, have CDW0.PSDT cleared to 0. Modify
nvmet_parse_admin_cmd() to check this.
Finally, modify nvmet_check_transfer_len() and
nvmet_check_data_len_lte() to return the appropriate error status
depending on the command using SGLs or PRPs. Since nvmet_req_init()
always checks that a fabrics command uses SGLs, this change affects
only PCI target controllers.
Signed-off-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Tested-by: Rick Wertenbroek <rick.wertenbroek@gmail.com> Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Signed-off-by: Keith Busch <kbusch@kernel.org>
Damien Le Moal [Sat, 4 Jan 2025 04:59:43 +0000 (13:59 +0900)]
nvmet: Add support for I/O queue management admin commands
The I/O submission queue management admin commands
(nvme_admin_delete_sq, nvme_admin_create_sq, nvme_admin_delete_cq,
and nvme_admin_create_cq) are mandatory admin commands for I/O
controllers using the PCI transport, that is, support for these commands
is mandatory for a PCI target I/O controller.
Implement support for these commands by adding the functions
nvmet_execute_delete_sq(), nvmet_execute_create_sq(),
nvmet_execute_delete_cq() and nvmet_execute_create_cq() to be set as the
execute method of requests for these commands. These functions will
return an invalid opcode error for any controller that is not a PCI
target controller. Support for the I/O queue management commands is also
reported in the command effect log of PCI target controllers (using
nvmet_get_cmd_effects_admin()).
Each management command is backed by a controller fabric operation
that can be defined by a PCI target controller driver to setup I/O
queues using nvmet_sq_create() and nvmet_cq_create() or delete I/O
queues using nvmet_sq_destroy().
As noted in a comment in nvmet_execute_create_sq(), we do not yet
support sharing a single CQ between multiple SQs.
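Illustrative sketch of the dispatch added to nvmet_parse_admin_cmd(); the
handlers themselves return an invalid opcode error when the controller is
not a PCI target controller:
	switch (cmd->common.opcode) {
	case nvme_admin_delete_sq:
		req->execute = nvmet_execute_delete_sq;
		return 0;
	case nvme_admin_create_sq:
		req->execute = nvmet_execute_create_sq;
		return 0;
	case nvme_admin_delete_cq:
		req->execute = nvmet_execute_delete_cq;
		return 0;
	case nvme_admin_create_cq:
		req->execute = nvmet_execute_create_cq;
		return 0;
	}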
Signed-off-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Tested-by: Rick Wertenbroek <rick.wertenbroek@gmail.com> Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Signed-off-by: Keith Busch <kbusch@kernel.org>
Damien Le Moal [Sat, 4 Jan 2025 04:59:42 +0000 (13:59 +0900)]
nvmet: Introduce nvmet_sq_create() and nvmet_cq_create()
Introduce the new functions nvmet_sq_create() and nvmet_cq_create() to
allow a target driver to initialize and set up admin and I/O queues
directly, without needing to execute fabrics connect commands.
The helper functions nvmet_check_cqid() and nvmet_check_sqid() are
implemented to check the correctness of SQ and CQ IDs when
nvmet_sq_create() and nvmet_cq_create() are called.
nvmet_sq_create() and nvmet_cq_create() are primarily intended for use
with PCI target controller drivers and thus are not well integrated
with the current queue creation of fabrics controllers using the connect
command. The existing fabrics drivers are not modified to use these functions.
This simple implementation of SQ and CQ management for PCI target
controller drivers does not allow multiple SQs to share the same CQ,
similarly to other fabrics transports. This is a specification
violation. A more involved set of changes will follow to add support for
this required completion queue sharing feature.
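The new helpers roughly take the controller, the queue, the queue ID and the
queue size; a sketch of the prototypes (the exact signatures are an
assumption based on the description):
u16 nvmet_check_cqid(struct nvmet_ctrl *ctrl, u16 cqid);
u16 nvmet_check_sqid(struct nvmet_ctrl *ctrl, u16 sqid, bool create);
u16 nvmet_cq_create(struct nvmet_ctrl *ctrl, struct nvmet_cq *cq,
		    u16 qid, u16 size);
u16 nvmet_sq_create(struct nvmet_ctrl *ctrl, struct nvmet_sq *sq,
		    u16 sqid, u16 size);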
Signed-off-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Tested-by: Rick Wertenbroek <rick.wertenbroek@gmail.com> Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Signed-off-by: Keith Busch <kbusch@kernel.org>
Damien Le Moal [Sat, 4 Jan 2025 04:59:41 +0000 (13:59 +0900)]
nvmet: Introduce nvmet_req_transfer_len()
Add the new function nvmet_req_transfer_len() to parse a request command
to extract the transfer length of the command. This function
implementation relies on multiple helper functions for parsing I/O
commands (nvmet_io_cmd_transfer_len()), admin commands
(nvmet_admin_cmd_data_len()) and fabrics connect commands
(nvmet_connect_cmd_data_len()).
Signed-off-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Tested-by: Rick Wertenbroek <rick.wertenbroek@gmail.com> Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Signed-off-by: Keith Busch <kbusch@kernel.org>
Damien Le Moal [Sat, 4 Jan 2025 04:59:40 +0000 (13:59 +0900)]
nvmet: Improve nvmet_alloc_ctrl() interface and implementation
Introduce struct nvmet_alloc_ctrl_args to define the arguments for
the function nvmet_alloc_ctrl() to avoid the need for passing a pointer
to a struct nvmet_req as an argument. This new data structure aggregates
together the arguments that were passed to nvmet_alloc_ctrl()
(subsysnqn, hostnqn and kato), together with the struct nvmet_req fields
used by nvmet_alloc_ctrl(), that is, the fields port, p2p_client, and
ops as input and the result and error_loc fields as output, as well as a
status field. nvmet_alloc_ctrl() is also changed to return a pointer
to the allocated and initialized controller structure instead of a
status code, as the status is now returned through the status field of
struct nvmet_alloc_ctrl_args.
The function nvmet_setup_p2p_ns_map() is changed to not take a pointer
to a struct nvmet_req as an argument; instead, the needed p2p_client
device pointer is passed directly as an argument.
The code in nvmet_execute_admin_connect() that initializes a new target
controller after allocating it is moved into nvmet_alloc_ctrl().
The code that sets up an admin queue for the controller (and the call
to nvmet_install_queue()) remains in nvmet_execute_admin_connect().
Finally, nvmet_alloc_ctrl() is also exported to allow target drivers to
use this function directly to allocate and initialize a new controller
structure without the need to rely on a fabrics connect command request.
Signed-off-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Tested-by: Rick Wertenbroek <rick.wertenbroek@gmail.com> Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Signed-off-by: Keith Busch <kbusch@kernel.org>
Damien Le Moal [Sat, 4 Jan 2025 04:59:39 +0000 (13:59 +0900)]
nvme: Add PCI transport type
Define the transport type NVMF_TRTYPE_PCI for PCI endpoint targets.
This transport type is defined using the value 0, which is reserved in
the NVMe base specification v2.1 (Figure 294). Given that struct
nvmet_port is zeroed out on creation, to avoid having this transport
type become the new default, nvmet_referral_make() and
nvmet_ports_make() are modified to initialize a port's discovery address
transport type field (disc_addr.trtype) to NVMF_TRTYPE_MAX.
Any port using this transport type is also skipped and not reported in
the discovery log page (nvmet_execute_disc_get_log_page()).
The helper function nvmet_is_pci_ctrl() is also introduced to check if
a target controller uses the PCI transport.
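A sketch of the definition and the helper (the PCI transport reuses the
value 0, which is reserved on the wire):
enum {
	NVMF_TRTYPE_PCI		= 0,	/* PCI endpoint target (reserved value) */
	NVMF_TRTYPE_RDMA	= 1,
	NVMF_TRTYPE_FC		= 2,
	NVMF_TRTYPE_TCP		= 3,
	NVMF_TRTYPE_LOOP	= 254,
	NVMF_TRTYPE_MAX		= 255,
};

static inline bool nvmet_is_pci_ctrl(struct nvmet_ctrl *ctrl)
{
	return ctrl->port->disc_addr.trtype == NVMF_TRTYPE_PCI;
}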
Signed-off-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Tested-by: Rick Wertenbroek <rick.wertenbroek@gmail.com> Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Signed-off-by: Keith Busch <kbusch@kernel.org>
Damien Le Moal [Sat, 4 Jan 2025 04:59:38 +0000 (13:59 +0900)]
nvmet: Add drvdata field to struct nvmet_ctrl
Allow a target driver to attach private data to a target controller by
adding the new field drvdata to struct nvmet_ctrl.
Signed-off-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Tested-by: Rick Wertenbroek <rick.wertenbroek@gmail.com> Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Signed-off-by: Keith Busch <kbusch@kernel.org>
Damien Le Moal [Sat, 4 Jan 2025 04:59:37 +0000 (13:59 +0900)]
nvmet: Introduce nvmet_get_cmd_effects_admin()
In order to have a logically better organized implementation of the
effects log page, split out reporting the supported admin commands from
nvmet_get_cmd_effects_nvm() into the new function
nvmet_get_cmd_effects_admin().
Signed-off-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Tested-by: Rick Wertenbroek <rick.wertenbroek@gmail.com> Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Signed-off-by: Keith Busch <kbusch@kernel.org>
Damien Le Moal [Sat, 4 Jan 2025 04:59:36 +0000 (13:59 +0900)]
nvmet: Export nvmet_update_cc() and nvmet_cc_xxx() helpers
Make the function nvmet_update_cc() available to target drivers by
exporting it. To also facilitate the manipulation of the cc register
bits, move the inline helper functions nvmet_cc_en(), nvmet_cc_css(),
nvmet_cc_mps(), nvmet_cc_ams(), nvmet_cc_shn(), nvmet_cc_iosqes(), and
nvmet_cc_iocqes() from core.c to nvmet.h so that these functions can be
reused in target controller drivers.
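The moved helpers are simple bit extractors over the CC register value, for
example:
static inline u32 nvmet_cc_en(u32 cc)
{
	return (cc >> NVME_CC_EN_SHIFT) & 0x1;
}

static inline u32 nvmet_cc_iosqes(u32 cc)
{
	return (cc >> NVME_CC_IOSQES_SHIFT) & 0xf;
}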
Signed-off-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Tested-by: Rick Wertenbroek <rick.wertenbroek@gmail.com> Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Signed-off-by: Keith Busch <kbusch@kernel.org>
Damien Le Moal [Sat, 4 Jan 2025 04:59:35 +0000 (13:59 +0900)]
nvmet: Add vendor_id and subsys_vendor_id subsystem attributes
Define the new vendor_id and subsys_vendor_id configfs attributes for
target subsystems. These attributes are respectively reported as the
vid field and the ssvid field of the identify controller data of
target controllers using the subsystem for which these attributes
are set.
Signed-off-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Tested-by: Rick Wertenbroek <rick.wertenbroek@gmail.com> Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Signed-off-by: Keith Busch <kbusch@kernel.org>
Move the declaration of all helper functions converting NVMe command
opcodes and status codes into strings from drivers/nvme/host/nvme.h
into include/linux/nvme.h, together with the command definitions.
This allows NVMe target drivers to call these functions without having
to include a host header file.
Signed-off-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Tested-by: Rick Wertenbroek <rick.wertenbroek@gmail.com> Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org> Signed-off-by: Keith Busch <kbusch@kernel.org>
Yongsoo Joo [Mon, 23 Dec 2024 01:05:17 +0000 (01:05 +0000)]
nvme: change return type of nvme_poll_cq() to bool
The nvme_poll_cq() function currently returns the number of CQEs
found. However, only one caller, nvme_poll(), requires a boolean
value to check whether any CQE was completed. The other callers do
not use the return value at all.
To better reflect its usage, update the return type of nvme_poll_cq()
from int to bool.
Signed-off-by: Yongsoo Joo <ysjoo@kookmin.ac.kr> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Keith Busch <kbusch@kernel.org>
Keisuke Nishimura [Fri, 20 Dec 2024 12:00:47 +0000 (13:00 +0100)]
nvme: Add error check for xa_store in nvme_get_effects_log
The xa_store() may fail due to a memory allocation failure because there
is no guarantee that the index csi is already in use. This fix adds an
error check of the return value of xa_store() in nvme_get_effects_log().
Fixes: 1cf7a12e09aa ("nvme: use an xarray to lookup the Commands Supported and Effects log") Signed-off-by: Keisuke Nishimura <keisuke.nishimura@inria.fr> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Keith Busch <kbusch@kernel.org>
Sagi Grimberg [Sat, 4 Jan 2025 21:27:11 +0000 (23:27 +0200)]
nvme-tcp: Fix I/O queue cpu spreading for multiple controllers
Since day one we have been assigning the queue io_cpu very naively. We
always base it on the queue id (controller scope) and assign it the
matching cpu from the online mask. This works fine when the number of
queues matches the number of cpu cores.
The problem starts when we have less queues than cpu cores. First, we
should take into account the mq_map and select a cpu within the cpus
that are assigned to this queue by the mq_map in order to minimize cross
numa cpu bouncing.
Second, and even worse, we don't take into account that multiple
controllers may have assigned queues to a given cpu. As a result we may
simply compound more and more queues on the same set of cpus, which is
suboptimal.
We fix this by introducing global per-cpu counters that track the
number of queues assigned to each cpu, and we select the least used cpu
based on the mq_map and the per-cpu counters, and assign it as the queue
io_cpu.
The behavior for a single controller is slightly optimized by selecting
better cpu candidates by consulting the mq_map, and multiple
controllers spread queues among cpu cores much better, resulting
in lower average cpu load, and less likelihood of hitting hotspots.
Note that the accounting is not 100% perfect, but it does not need to
be; we are simply making a best effort to select the best candidate cpu
core that we can find at any given point.
Another byproduct is that every controller reset/reconnect may change
the queues io_cpu mapping, based on the current LRU accounting scheme.
Here is the baseline queue io_cpu assignment for 4 controllers, 2 queues
per controller, and 4 cpus on the host:
nvme1: queue 0: using cpu 0
nvme1: queue 1: using cpu 1
nvme2: queue 0: using cpu 0
nvme2: queue 1: using cpu 1
nvme3: queue 0: using cpu 0
nvme3: queue 1: using cpu 1
nvme4: queue 0: using cpu 0
nvme4: queue 1: using cpu 1
And this is the fixed io_cpu assignment:
nvme1: queue 0: using cpu 0
nvme1: queue 1: using cpu 2
nvme2: queue 0: using cpu 1
nvme2: queue 1: using cpu 3
nvme3: queue 0: using cpu 0
nvme3: queue 1: using cpu 2
nvme4: queue 0: using cpu 1
nvme4: queue 1: using cpu 3
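A simplified sketch of the selection, with an illustrative per-cpu counter
name (the driver's actual implementation differs in detail):
static DEFINE_PER_CPU(atomic_t, nvme_tcp_cpu_queues);

/* Pick the least-loaded CPU among those that blk-mq mapped to this
 * hardware queue, and account for the new assignment. */
static int nvme_tcp_pick_io_cpu(struct blk_mq_queue_map *map, int qid)
{
	int cpu, best = WORK_CPU_UNBOUND, min_cnt = INT_MAX;

	for_each_online_cpu(cpu) {
		int cnt;

		if (map->mq_map[cpu] != qid)
			continue;
		cnt = atomic_read(&per_cpu(nvme_tcp_cpu_queues, cpu));
		if (cnt < min_cnt) {
			min_cnt = cnt;
			best = cpu;
		}
	}
	if (best != WORK_CPU_UNBOUND)
		atomic_inc(&per_cpu(nvme_tcp_cpu_queues, best));
	return best;
}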
Fixes: 3f2304f8c6d6 ("nvme-tcp: add NVMe over TCP host driver") Suggested-by: Hannes Reinecke <hare@kernel.org> Signed-off-by: Sagi Grimberg <sagi@grimberg.me> Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com> Reviewed-by: Christoph Hellwig <hch@lst.de>
[fixed kbuild reported errors] Signed-off-by: Chaitanya Kulkarni <kch@nvidia.com> Signed-off-by: Keith Busch <kbusch@kernel.org>
Guixin Liu [Mon, 9 Dec 2024 01:53:44 +0000 (09:53 +0800)]
nvmet: handle rw's limited retry flag
In some scenarios, some multipath software setups place the
REQ_FAILFAST_DEV flag on I/O to prevent retries and to immediately
switch to other paths for issuing I/O commands. This is reflected
in the NVMe read and write commands by the limited retry flag.
However, the current NVMe target side does not handle the limited
retry flag, and the target's underlying driver still retries the
I/O. This will result in the I/O not being quickly switched to
other paths, ultimately leading to increased I/O latency.
When the nvme target receives an rw command with the limited retry flag,
handle it in the block backend by setting the REQ_FAILFAST_DEV flag on
the bio.
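A sketch of the block backend change, using the existing NVME_RW_LR control
bit (illustrative placement in nvmet_bdev_execute_rw()):
	/* Propagate the NVMe limited retry flag to the bio. */
	if (req->cmd->rw.control & cpu_to_le16(NVME_RW_LR))
		opf |= REQ_FAILFAST_DEV;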
Signed-off-by: Guixin Liu <kanie@linux.alibaba.com> Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Sagi Grimberg <sagi@grimberg.me> Signed-off-by: Keith Busch <kbusch@kernel.org>
Yu Kuai [Fri, 3 Jan 2025 09:28:59 +0000 (17:28 +0800)]
nbd: don't allow reconnect after disconnect
Following process can cause nbd_config UAF:
1) grab nbd_config temporarily;
2) nbd_genl_disconnect() flush all recv_work() and release the
initial reference:
nbd_genl_disconnect
nbd_disconnect_and_put
nbd_disconnect
flush_workqueue(nbd->recv_workq)
if (test_and_clear_bit(NBD_RT_HAS_CONFIG_REF, ...))
nbd_config_put
-> due to step 1), reference is still not zero
Christoph Hellwig [Mon, 6 Jan 2025 08:35:10 +0000 (09:35 +0100)]
block: remove BLK_MQ_F_NO_SCHED
The only queues that really can't support a scheduler are those that
do not have a gendisk associated with them, and thus can't be used for
non-passthrough commands. In addition to those, null_blk can optionally
set the flag, which is a bit odd. Replace the null_blk usage with
BLK_MQ_F_NO_SCHED_BY_DEFAULT to keep the expected semantics and then
remove BLK_MQ_F_NO_SCHED as the non-disk queues never call into
elevator_init_mq or blk_register_queue which adds the sysfs attributes.
Christoph Hellwig [Mon, 6 Jan 2025 08:35:08 +0000 (09:35 +0100)]
block: better split mq vs non-mq code in add_disk_fwnode
Add a big conditional for blk-mq vs not mq at the beginning of
add_disk_fwnode so that elevator_init_mq is only called for blk-mq disks,
and add checks that the right methods are set or not set based on the
queue type.
Christoph Hellwig [Mon, 6 Jan 2025 08:15:29 +0000 (09:15 +0100)]
block: add a dma mapping iterator
blk_rq_map_sg is a maze of nested loops. Untangle it by creating an
iterator that returns [paddr,len] tuples for DMA mapping, and then
implement the DMA logic on top of this. This not only removes code
at the source level, but also generates nicer binary code:
$ size block/blk-merge.o.*
text data bss dec hex filename
10001 432 0 10433 28c1 block/blk-merge.o.new
10317 468 0 10785 2a21 block/blk-merge.o.old
Last but not least it will be used as a building block for a new
DMA mapping helper that doesn't rely on struct scatterlist.
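Conceptually, the iterator yields [paddr, len] tuples that the mapping loop
consumes; a rough sketch with illustrative names (not necessarily those used
in the patch):
struct phys_vec {
	phys_addr_t	paddr;
	u32		len;
};

/* Returns the next physically contiguous chunk of the request,
 * or false when the request has been fully consumed. */
static bool blk_map_iter_next(struct request *rq, struct req_iterator *iter,
			      struct phys_vec *vec);

/* Usage in the scatterlist mapping code: */
while (blk_map_iter_next(rq, &iter, &vec)) {
	*last_sg = blk_next_sg(last_sg, sglist);
	sg_set_page(*last_sg, pfn_to_page(PHYS_PFN(vec.paddr)), vec.len,
		    offset_in_page(vec.paddr));
	nsegs++;
}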
Christoph Hellwig [Fri, 3 Jan 2025 07:33:58 +0000 (08:33 +0100)]
block: remove blk_rq_bio_prep
There is no real point in a helper just to assign three values to four
fields, especially when the surrounding code is working on the
neighboring fields directly.
Christoph Hellwig [Fri, 3 Jan 2025 07:33:57 +0000 (08:33 +0100)]
block: remove bio_add_pc_page
Lift bio_split_rw_at into blk_rq_append_bio so that it validates the
hardware limits. With this, all passthrough callers can simply use
bio_add_page to build the bio and delay checking for exceeding the limits
to this point instead of doing it for each page.
While this looks like adding a new expensive loop over all bio_vecs,
blk_rq_append_bio is already doing that just to count the number of
segments.
Before commit e418de3abcda ("block: switch gendisk lookup to a simple
xarray"), lookup_gendisk will first use base_probe to load the loop
module, and then the retry will call loop_probe to prepare the loop disk.
Finally, open for this disk will succeed. However, after this commit, we
lose the retry logic, and open will fail with ENXIO. Block device
autoloading is deprecated and will be removed soon, but maybe we should
keep open succeeding until we really remove it. So, give it a retry to
fix this.
Fixes: e418de3abcda ("block: switch gendisk lookup to a simple xarray") Suggested-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Yang Erkun <yangerkun@huawei.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/r/20241209110435.3670985-1-yangerkun@huaweicloud.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
Thomas Weißschuh [Thu, 2 Jan 2025 12:01:34 +0000 (13:01 +0100)]
kyber: constify sysfs attributes
The elevator core now allows instances of 'struct elv_fs_entry' to be
moved into read-only memory. Make use of that to protect them against
accidental or malicious modifications.
Thomas Weißschuh [Thu, 2 Jan 2025 12:01:33 +0000 (13:01 +0100)]
block, bfq: constify sysfs attributes
The elevator core now allows instances of 'struct elv_fs_entry' to be
moved into read-only memory. Make use of that to protect them against
accidental or malicious modifications.
Thomas Weißschuh [Thu, 2 Jan 2025 12:01:32 +0000 (13:01 +0100)]
block: mq-deadline: Constify sysfs attributes
The elevator core now allows instances of 'struct elv_fs_entry' to be
moved into read-only memory. Make use of that to protect them against
accidental or malicious modifications.
Thomas Weißschuh [Thu, 2 Jan 2025 12:01:31 +0000 (13:01 +0100)]
elevator: Enable const sysfs attributes
The elevator core does not need to modify the sysfs attributes added by
the elevators. Reflect this in the types, so the attributes can be moved
into read-only memory.
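With the elevator core taking const pointers, an elevator's attribute table
can now live in read-only memory; for example (sketch):
/* In struct elevator_type: */
const struct elv_fs_entry *elevator_attrs;

/* And in an elevator, e.g. mq-deadline: */
static const struct elv_fs_entry deadline_attrs[] = {
	/* ... attribute entries ... */
	__ATTR_NULL
};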
Christoph Hellwig [Thu, 19 Dec 2024 06:01:59 +0000 (07:01 +0100)]
block: remove BLK_MQ_F_SHOULD_MERGE
BLK_MQ_F_SHOULD_MERGE is set for all tag_sets except those that purely
process passthrough commands (bsg-lib, ufs tmf, various nvme admin
queues) and thus don't even check the flag. Remove it to simplify the
driver interface.
Daniel Wagner [Mon, 2 Dec 2024 14:00:16 +0000 (15:00 +0100)]
blk-mq: remove unused queue mapping helpers
There are no users left of the pci and virtio queue mapping helpers.
Thus remove them.
Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Ming Lei <ming.lei@redhat.com> Reviewed-by: John Garry <john.g.garry@oracle.com> Signed-off-by: Daniel Wagner <wagi@kernel.org> Link: https://lore.kernel.org/r/20241202-refactor-blk-affinity-helpers-v6-8-27211e9c2cd5@kernel.org Signed-off-by: Jens Axboe <axboe@kernel.dk>
Daniel Wagner [Mon, 2 Dec 2024 14:00:15 +0000 (15:00 +0100)]
virtio: blk/scsi: replace blk_mq_virtio_map_queues with blk_mq_map_hw_queues
Replace all users of blk_mq_virtio_map_queues with the more generic
blk_mq_map_hw_queues. This is in preparation to retire
blk_mq_virtio_map_queues.
Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Ming Lei <ming.lei@redhat.com> Reviewed-by: John Garry <john.g.garry@oracle.com> Signed-off-by: Daniel Wagner <wagi@kernel.org> Link: https://lore.kernel.org/r/20241202-refactor-blk-affinity-helpers-v6-7-27211e9c2cd5@kernel.org Signed-off-by: Jens Axboe <axboe@kernel.dk>
Daniel Wagner [Mon, 2 Dec 2024 14:00:14 +0000 (15:00 +0100)]
nvme: replace blk_mq_pci_map_queues with blk_mq_map_hw_queues
Replace all users of blk_mq_pci_map_queues with the more generic
blk_mq_map_hw_queues. This is in preparation to retire
blk_mq_pci_map_queues.
Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Ming Lei <ming.lei@redhat.com> Reviewed-by: John Garry <john.g.garry@oracle.com> Signed-off-by: Daniel Wagner <wagi@kernel.org> Link: https://lore.kernel.org/r/20241202-refactor-blk-affinity-helpers-v6-6-27211e9c2cd5@kernel.org Signed-off-by: Jens Axboe <axboe@kernel.dk>
Daniel Wagner [Mon, 2 Dec 2024 14:00:13 +0000 (15:00 +0100)]
scsi: replace blk_mq_pci_map_queues with blk_mq_map_hw_queues
Replace all users of blk_mq_pci_map_queues with the more generic
blk_mq_map_hw_queues. This is in preparation to retire
blk_mq_pci_map_queues.
Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Ming Lei <ming.lei@redhat.com> Reviewed-by: John Garry <john.g.garry@oracle.com> Signed-off-by: Daniel Wagner <wagi@kernel.org> Link: https://lore.kernel.org/r/20241202-refactor-blk-affinity-helpers-v6-5-27211e9c2cd5@kernel.org Signed-off-by: Jens Axboe <axboe@kernel.dk>
Daniel Wagner [Mon, 2 Dec 2024 14:00:12 +0000 (15:00 +0100)]
blk-mq: introduce blk_mq_map_hw_queues
blk_mq_pci_map_queues and blk_mq_virtio_map_queues will create a CPU to
hardware queue mapping based on affinity information. These two
functions share common code and only differ in how the affinity
information is retrieved. Also, those functions are located in the block
subsystem, where they don't really fit in; they are virtio and pci
subsystem specific.
Thus, introduce a generic mapping function which uses the
irq_get_affinity callback from bus_type.
Original idea from Ming Lei <ming.lei@redhat.com>.
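A rough sketch of the generic helper (the signature and details are an
assumption based on the description):
void blk_mq_map_hw_queues(struct blk_mq_queue_map *qmap,
			  struct device *dev, unsigned int offset)
{
	const struct cpumask *mask;
	unsigned int queue, cpu;

	if (!dev->bus->irq_get_affinity)
		goto fallback;

	for (queue = 0; queue < qmap->nr_queues; queue++) {
		mask = dev->bus->irq_get_affinity(dev, queue + offset);
		if (!mask)
			goto fallback;

		for_each_cpu(cpu, mask)
			qmap->mq_map[cpu] = qmap->queue_offset + queue;
	}
	return;

fallback:
	blk_mq_map_queues(qmap);
}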
Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Ming Lei <ming.lei@redhat.com> Reviewed-by: John Garry <john.g.garry@oracle.com> Signed-off-by: Daniel Wagner <wagi@kernel.org> Link: https://lore.kernel.org/r/20241202-refactor-blk-affinity-helpers-v6-4-27211e9c2cd5@kernel.org Signed-off-by: Jens Axboe <axboe@kernel.dk>
Daniel Wagner [Mon, 2 Dec 2024 14:00:11 +0000 (15:00 +0100)]
virtio: hookup irq_get_affinity callback
struct bus_type has a new callback for retrieving the IRQ affinity for a
device. Hook this callback up for virtio based devices.
Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Ming Lei <ming.lei@redhat.com> Reviewed-by: John Garry <john.g.garry@oracle.com> Signed-off-by: Daniel Wagner <wagi@kernel.org> Link: https://lore.kernel.org/r/20241202-refactor-blk-affinity-helpers-v6-3-27211e9c2cd5@kernel.org Signed-off-by: Jens Axboe <axboe@kernel.dk>
Daniel Wagner [Mon, 2 Dec 2024 14:00:10 +0000 (15:00 +0100)]
PCI: hookup irq_get_affinity callback
struct bus_type has a new callback for retrieving the IRQ affinity for a
device. Hook this callback up for PCI based devices.
Acked-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Ming Lei <ming.lei@redhat.com> Reviewed-by: John Garry <john.g.garry@oracle.com> Signed-off-by: Daniel Wagner <wagi@kernel.org> Link: https://lore.kernel.org/r/20241202-refactor-blk-affinity-helpers-v6-2-27211e9c2cd5@kernel.org Signed-off-by: Jens Axboe <axboe@kernel.dk>
Daniel Wagner [Mon, 2 Dec 2024 14:00:09 +0000 (15:00 +0100)]
driver core: bus: add irq_get_affinity callback to bus_type
Introduce a callback in struct bus_type so that a subsystem
can hook up the getters directly. This approach avoids exposing
random getters in any subsystem's APIs.
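The callback is a single getter on struct bus_type; a sketch of the
declaration (the exact signature is an assumption):
struct bus_type {
	...
	/* Return the CPU affinity mask of the given interrupt vector of
	 * a device on this bus, or NULL if not applicable. */
	const struct cpumask *(*irq_get_affinity)(struct device *dev,
						  unsigned int irq_vec);
};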
Acked-by: Bjorn Helgaas <bhelgaas@google.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Hannes Reinecke <hare@suse.de> Reviewed-by: Ming Lei <ming.lei@redhat.com> Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Daniel Wagner <wagi@kernel.org> Link: https://lore.kernel.org/r/20241202-refactor-blk-affinity-helpers-v6-1-27211e9c2cd5@kernel.org Signed-off-by: Jens Axboe <axboe@kernel.dk>
Colin Ian King [Wed, 4 Dec 2024 15:04:50 +0000 (15:04 +0000)]
blktrace: remove redundant return at end of function
A recent change added return 0 before an existing return statement
at the end of function blk_trace_setup. The final return is now
redundant, so remove it.
John Garry [Mon, 2 Dec 2024 11:19:57 +0000 (11:19 +0000)]
block: Delete bio_set_prio()
Since commit 43b62ce3ff0a ("block: move bio io prio to a new field"), macro
bio_set_prio() does nothing but set bio->bi_ioprio. All other places just
set bio->bi_ioprio directly, so replace the remaining bio_set_prio()
callsites with setting bio->bi_ioprio directly and delete that macro.
John Garry [Mon, 2 Dec 2024 11:19:56 +0000 (11:19 +0000)]
block: Delete bio_prio()
Since commit 43b62ce3ff0a ("block: move bio io prio to a new field"), macro
bio_prio() does nothing but return the value in bio->bi_ioprio. Most other
places just read bio->bi_ioprio directly, so replace the bio_prio()
callsites with reading bio->bi_ioprio directly and delete that macro.
Ming Lei [Thu, 28 Nov 2024 12:50:27 +0000 (20:50 +0800)]
blktrace: move copy_[to|from]_user() out of ->debugfs_lock
Move copy_[to|from]_user() out of ->debugfs_lock and cut the dependency
between mm->mmap_lock and q->debugfs_lock, thus avoiding lots of
lockdep false positive warnings. Obviously, ->debugfs_lock isn't needed
for copy_[to|from]_user().
The only behavior change is to call blk_trace_remove() in case of setup
failure by re-grabbing ->debugfs_lock, and this is just fine since we do
cover concurrent setup() & remove().
Reported-by: syzbot+91585b36b538053343e4@syzkaller.appspotmail.com Closes: https://lore.kernel.org/linux-block/67450fd4.050a0220.1286eb.0007.GAE@google.com/ Closes: https://lore.kernel.org/linux-block/6742e584.050a0220.1cc393.0038.GAE@google.com/ Closes: https://lore.kernel.org/linux-block/6742a600.050a0220.1cc393.002e.GAE@google.com/ Closes: https://lore.kernel.org/linux-block/67420102.050a0220.1cc393.0019.GAE@google.com/ Signed-off-by: Ming Lei <ming.lei@redhat.com> Link: https://lore.kernel.org/r/20241128125029.4152292-3-ming.lei@redhat.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
Damien Le Moal [Tue, 26 Nov 2024 00:09:56 +0000 (09:09 +0900)]
null_blk: Add rotational feature support
To facilitate testing of kernel functions related to the rotational
feature (BLK_FEAT_ROTATIONAL) of a block device (e.g. NVMe rotational
bit support), add the rotational boolean configfs attribute and module
parameter to the null_blk driver. If set, a null block device will
report being a rotational device through its queue limits features with
the BLK_FEAT_ROTATIONAL flag.
Signed-off-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by: Hannes Reinecke <hare@suse.de> Link: https://lore.kernel.org/r/20241126000956.95983-1-dlemoal@kernel.org Signed-off-by: Jens Axboe <axboe@kernel.dk>
Ming Lei [Wed, 27 Nov 2024 13:51:30 +0000 (21:51 +0800)]
block: track queue dying state automatically for modeling queue freeze lockdep
Now we only verify the outermost freeze & unfreeze in the current
context when !q->mq_freeze_depth, so it is reliable to save the queue
dying state when we want to lock the freeze queue, since the state is a
per-task variable now.
Ming Lei [Wed, 27 Nov 2024 13:51:29 +0000 (21:51 +0800)]
block: don't verify queue freeze manually in elevator_init_mq()
Now blk_freeze_queue_start() can track disk state automatically, and
it isn't necessary to verify queue freeze manually in elevator_init_mq()
any more.
Ming Lei [Wed, 27 Nov 2024 13:51:28 +0000 (21:51 +0800)]
block: track disk DEAD state automatically for modeling queue freeze lockdep
Now we only verify the outermost freeze & unfreeze in the current
context when !q->mq_freeze_depth, so it is reliable to save the disk
DEAD state when we want to lock the freeze queue, since the state is a
per-task variable now.
Doing it this way kills lots of false positives when freeze queue is
called before adding the disk [1].
Linus Torvalds [Sun, 22 Dec 2024 20:16:41 +0000 (12:16 -0800)]
Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm
Pull KVM x86 fixes from Paolo Bonzini:
- Disable AVIC on SNP-enabled systems that don't allow writes to the
virtual APIC page, as such hosts will hit unexpected RMP #PFs in the
host when running VMs of any flavor.
- Fix a WARN in the hypercall completion path due to KVM trying to
determine if a guest with protected register state is in 64-bit mode
(KVM's ABI is to assume such guests only make hypercalls in 64-bit
mode).
- Allow the guest to write to supported bits in MSR_AMD64_DE_CFG to fix
a regression with Windows guests, and because KVM's read-only
behavior appears to be entirely made up.
- Treat TDP MMU faults as spurious if the faulting access is allowed
given the existing SPTE. This fixes a benign WARN (other than the
WARN itself) due to unexpectedly replacing a writable SPTE with a
read-only SPTE.
- Emit a warning when KVM is configured with ignore_msrs=1 and also to
hide the MSRs that the guest is looking for from the kernel logs.
ignore_msrs can trick guests into assuming that certain processor
features are present, and this in turn leads to bogus bug reports.
* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm:
KVM: x86: let it be known that ignore_msrs is a bad idea
KVM: VMX: don't include '<linux/find.h>' directly
KVM: x86/mmu: Treat TDP MMU faults as spurious if access is already allowed
KVM: SVM: Allow guest writes to set MSR_AMD64_DE_CFG bits
KVM: x86: Play nice with protected guests in complete_hypercall_exit()
KVM: SVM: Disable AVIC on SNP-enabled system without HvInUseWrAllowed feature
Paolo Bonzini [Sun, 22 Dec 2024 17:07:16 +0000 (12:07 -0500)]
Merge tag 'kvm-x86-fixes-6.13-rcN' of https://github.com/kvm-x86/linux into HEAD
KVM x86 fixes for 6.13:
- Disable AVIC on SNP-enabled systems that don't allow writes to the virtual
APIC page, as such hosts will hit unexpected RMP #PFs in the host when
running VMs of any flavor.
- Fix a WARN in the hypercall completion path due to KVM trying to determine
if a guest with protected register state is in 64-bit mode (KVM's ABI is to
assume such guests only make hypercalls in 64-bit mode).
- Allow the guest to write to supported bits in MSR_AMD64_DE_CFG to fix a
regression with Windows guests, and because KVM's read-only behavior appears
to be entirely made up.
- Treat TDP MMU faults as spurious if the faulting access is allowed given the
existing SPTE. This fixes a benign WARN (other than the WARN itself) due to
unexpectedly replacing a writable SPTE with a read-only SPTE.
Paolo Bonzini [Thu, 19 Dec 2024 12:43:20 +0000 (07:43 -0500)]
KVM: x86: let it be known that ignore_msrs is a bad idea
When running KVM with ignore_msrs=1 and report_ignored_msrs=0, the user
has no clue that the guest is being lied to. This may cause bug reports
such as https://gitlab.com/qemu-project/qemu/-/issues/2571, where enabling
a CPUID bit in QEMU caused Linux guests to try reading MSR_CU_DEF_ERR; and
being lied about the existence of MSR_CU_DEF_ERR caused the guest to assume
other things about the local APIC which were not true:
Sep 14 12:02:53 kernel: mce: [Firmware Bug]: Your BIOS is not setting up LVT offset 0x2 for deferred error IRQs correctly.
Sep 14 12:02:53 kernel: unchecked MSR access error: RDMSR from 0x852 at rIP: 0xffffffffb548ffa7 (native_read_msr+0x7/0x40)
Sep 14 12:02:53 kernel: Call Trace:
...
Sep 14 12:02:53 kernel: native_apic_msr_read+0x20/0x30
Sep 14 12:02:53 kernel: setup_APIC_eilvt+0x47/0x110
Sep 14 12:02:53 kernel: mce_amd_feature_init+0x485/0x4e0
...
Sep 14 12:02:53 kernel: [Firmware Bug]: cpu 0, try to use APIC520 (LVT offset 2) for vector 0xf4, but the register is already in use for vector 0x0 on this cpu
Without report_ignored_msrs=0, at least the host kernel log will contain
enough information to avoid going on a wild goose chase. But if reports
about individual MSR accesses are being silenced too, at least complain
loudly the first time a VM is started.
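The complaint amounts to a one-time warning when both knobs are set this
way; a sketch (illustrative placement and wording):
	if (ignore_msrs && !report_ignored_msrs)
		pr_warn_once("Running KVM with ignore_msrs=1 and report_ignored_msrs=0 is not a supported configuration\n");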
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Wolfram Sang [Tue, 17 Dec 2024 07:05:40 +0000 (08:05 +0100)]
KVM: VMX: don't include '<linux/find.h>' directly
The header clearly states that it does not want to be included directly,
only via '<linux/bitmap.h>'. Replace the include accordingly.
Signed-off-by: Wolfram Sang <wsa+renesas@sang-engineering.com>
Message-ID: <20241217070539.2433-2-wsa+renesas@sang-engineering.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Linus Torvalds [Sun, 22 Dec 2024 16:40:23 +0000 (08:40 -0800)]
Merge tag 'devicetree-fixes-for-6.13-1' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux
Pull devicetree fixes from Rob Herring:
- Disable #address-cells/#size-cells warning on coreboot (Chromebooks)
platforms
- Add missing root #address-cells/#size-cells in default empty DT
- Fix uninitialized variable in of_irq_parse_one()
- Fix interrupt-map cell length check in of_irq_parse_imap_parent()
- Fix refcount handling in __of_get_dma_parent()
- Fix error path in of_parse_phandle_with_args_map()
- Fix dma-ranges handling with flags cells
- Drop explicit fw_devlink handling of 'interrupt-parent'
- Fix "compression" typo in fixed-partitions binding
- Unify "fsl,liodn" property type definitions
* tag 'devicetree-fixes-for-6.13-1' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux:
of: Add coreboot firmware to excluded default cells list
of/irq: Fix using uninitialized variable @addr_len in API of_irq_parse_one()
of/irq: Fix interrupt-map cell length check in of_irq_parse_imap_parent()
of: Fix refcount leakage for OF node returned by __of_get_dma_parent()
of: Fix error path in of_parse_phandle_with_args_map()
dt-bindings: mtd: fixed-partitions: Fix "compression" typo
of: Add #address-cells/#size-cells in the device-tree root empty node
dt-bindings: Unify "fsl,liodn" type definitions
of: address: Preserve the flags portion on 1:1 dma-ranges mapping
of/unittest: Add empty dma-ranges address translation tests
of: property: fw_devlink: Do not use interrupt-parent directly
Linus Torvalds [Sat, 21 Dec 2024 23:45:06 +0000 (15:45 -0800)]
Merge tag 'soc-fixes-6.13-2' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc
Pull SoC fixes from Arnd Bergmann:
"Two more small fixes, correcting the cacheline size on Raspberry Pi 5
and fixing a logic mistake in the microchip mpfs firmware driver"
* tag 'soc-fixes-6.13-2' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc:
arm64: dts: broadcom: Fix L2 linesize for Raspberry Pi 5
firmware: microchip: fix UL_IAP lock check in mpfs_auto_update_state()
Linus Torvalds [Sat, 21 Dec 2024 23:31:56 +0000 (15:31 -0800)]
Merge tag 'mm-hotfixes-stable-2024-12-21-12-09' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Pull misc fixes from Andrew Morton:
"25 hotfixes. 16 are cc:stable. 19 are MM and 6 are non-MM.
The usual bunch of singletons and doubletons - please see the relevant
changelogs for details"
* tag 'mm-hotfixes-stable-2024-12-21-12-09' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (25 commits)
mm: huge_memory: handle strsep not finding delimiter
alloc_tag: fix set_codetag_empty() when !CONFIG_MEM_ALLOC_PROFILING_DEBUG
alloc_tag: fix module allocation tags populated area calculation
mm/codetag: clear tags before swap
mm/vmstat: fix a W=1 clang compiler warning
mm: convert partially_mapped set/clear operations to be atomic
nilfs2: fix buffer head leaks in calls to truncate_inode_pages()
vmalloc: fix accounting with i915
mm/page_alloc: don't call pfn_to_page() on possibly non-existent PFN in split_large_buddy()
fork: avoid inappropriate uprobe access to invalid mm
nilfs2: prevent use of deleted inode
zram: fix uninitialized ZRAM not releasing backing device
zram: refuse to use zero sized block device as backing device
mm: use clear_user_(high)page() for arch with special user folio handling
mm: introduce cpu_icache_is_aliasing() across all architectures
mm: add RCU annotation to pte_offset_map(_lock)
mm: correctly reference merged VMA
mm: use aligned address in copy_user_gigantic_page()
mm: use aligned address in clear_gigantic_page()
mm: shmem: fix ShmemHugePages at swapout
...
It appears that some modules call the function nec7210_board_reset()
that is defined in nec7210.c. In an allyesconfig build, these other
modules are built in. But the file that holds nec7210_board_reset()
has:
obj-m += nec7210.o
Where that "-m" means it only gets built as a module. With the other
modules built in, they have no access to nec7210_board_reset() and the build
fails.
This isn't the only function. After fixing that one, I hit another case,
where push_gpib_event() was also used outside of the file it was defined
in, and that file too was only built as a module.
Since the directory that holds nec7210.c is only traversed when
CONFIG_GPIB_NEC7210 is set, and the directory with gpib_common.c is only
traversed when CONFIG_GPIB_COMMON is set, use those configs as the
options to build those modules. When it is an allyesconfig, they
will both be built in and their functions will be available to the other
modules that are also built in.
Linus Torvalds [Sat, 21 Dec 2024 19:24:32 +0000 (11:24 -0800)]
Merge tag 'kbuild-fixes-v6.13-2' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild
Pull Kbuild fixes from Masahiro Yamada:
- Remove stale code in usr/include/headers_check.pl
- Fix issues in the user-mode-linux Debian package
- Fix false-positive "export twice" errors in modpost
* tag 'kbuild-fixes-v6.13-2' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
modpost: distinguish same module paths from different dump files
kbuild: deb-pkg: Do not install maint scripts for arch 'um'
kbuild: deb-pkg: add debarch for ARCH=um
kbuild: Drop support for include/asm-<arch> in headers_check.pl
Linus Torvalds [Sat, 21 Dec 2024 19:07:19 +0000 (11:07 -0800)]
Merge tag 'bpf-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf
Pull BPF fixes from Daniel Borkmann:
- Fix inlining of bpf_get_smp_processor_id helper for !CONFIG_SMP
systems (Andrea Righi)
- Fix BPF USDT selftests helper code to use asm constraint "m" for
LoongArch (Tiezhu Yang)
- Fix BPF selftest compilation error in get_uprobe_offset when
PROCMAP_QUERY is not defined (Jerome Marchand)
- Fix BPF bpf_skb_change_tail helper when used in context of BPF
sockmap to handle negative skb header offsets (Cong Wang)
- Several fixes to BPF sockmap code, among others, in the area of
socket buffer accounting (Levi Zim, Zijian Zhang, Cong Wang)
* tag 'bpf-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf:
selftests/bpf: Test bpf_skb_change_tail() in TC ingress
selftests/bpf: Introduce socket_helpers.h for TC tests
selftests/bpf: Add a BPF selftest for bpf_skb_change_tail()
bpf: Check negative offsets in __bpf_skb_min_len()
tcp_bpf: Fix copied value in tcp_bpf_sendmsg
skmsg: Return copied bytes in sk_msg_memcopy_from_iter
tcp_bpf: Add sk_rmem_alloc related logic for tcp_bpf ingress redirection
tcp_bpf: Charge receive socket buffer in bpf_tcp_ingress()
selftests/bpf: Fix compilation error in get_uprobe_offset()
selftests/bpf: Use asm constraint "m" for LoongArch
bpf: Fix bpf_get_smp_processor_id() on !CONFIG_SMP
Linus Torvalds [Sat, 21 Dec 2024 18:56:34 +0000 (10:56 -0800)]
Merge tag 'media/v6.13-3' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media
Pull media fixes from Mauro Carvalho Chehab:
- fix a clang build issue with mediatec vcodec
- add missing variable initialization to dib3000mb write function
* tag 'media/v6.13-3' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media:
media: mediatek: vcodec: mark vdec_vp9_slice_map_counts_eob_coef noinline
media: dvb-frontends: dib3000mb: fix uninit-value in dib3000_write_reg
Linus Torvalds [Sat, 21 Dec 2024 18:51:04 +0000 (10:51 -0800)]
Merge tag 'pci-v6.13-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci
Pull PCI fixes from Krzysztof Wilczyński:
"Two small patches that are important for fixing boot time hang on
Intel JHL7540 'Titan Ridge' platforms equipped with a Thunderbolt
controller.
The boot time issue manifests itself when PCI Express bandwidth
control is unnecessarily enabled on the Thunderbolt controller
downstream ports, which only support a link speed of 2.5 GT/s in
accordance with the USB4 v2 specification (p. 671, sec. 11.2.1, "PCIe
Physical Layer Logical Sub-block").
As such, there is no need to enable bandwidth control on such
downstream port links, which also works around the issue.
Both patches were tested by the original reporter on the hardware on
which the failure originally manifested itself. Both fixes were
proven to resolve the reported boot hang issue, and both patches have
been in linux-next this week with no reported problems"
* tag 'pci-v6.13-fixes-2' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci:
PCI/bwctrl: Enable only if more than one speed is supported
PCI: Honor Max Link Speed when determining supported speeds
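A minimal sketch of the check described in the pull message above, assuming
the supported link speeds are available as a per-port bitmask (one bit per
speed, as in the Supported Link Speeds Vector); the helper name and the
surrounding plumbing are illustrative assumptions, not the actual
drivers/pci/pcie/bwctrl.c code:
  #include <linux/bitops.h>
  #include <linux/types.h>

  /* One bit per supported link speed; illustrative helper, not the driver. */
  static bool bwctrl_worth_enabling(u8 supported_speeds)
  {
          /*
           * A port fixed at a single speed (e.g. 2.5 GT/s only) has no
           * bandwidth to control, so skip the bwctrl service for it.
           */
          return hweight8(supported_speeds) > 1;
  }
Skipping the service on single-speed ports both avoids pointless work and
sidesteps the boot hang seen on the Titan Ridge downstream ports.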
Linus Torvalds [Sat, 21 Dec 2024 18:47:47 +0000 (10:47 -0800)]
Merge tag 'pm-6.13-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management fixes from Rafael Wysocki:
"These fix some amd-pstate driver issues:
- Detect preferred core support in amd-pstate before driver
registration to avoid initialization ordering issues (K Prateek
Nayak)
- Fix issues with boost numerator handling in amd-pstate leading
to inconsistently programmed CPPC max performance values (Mario
Limonciello)"
* tag 'pm-6.13-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
cpufreq/amd-pstate: Use boost numerator for upper bound of frequencies
cpufreq/amd-pstate: Store the boost numerator as highest perf again
cpufreq/amd-pstate: Detect preferred core support before driver registration
Linus Torvalds [Sat, 21 Dec 2024 18:44:44 +0000 (10:44 -0800)]
Merge tag 'thermal-6.13-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull thermal control fixes from Rafael Wysocki:
"Fix two issues with the user thermal thresholds feature introduced in
this development cycle (Daniel Lezcano)"
* tag 'thermal-6.13-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
thermal/thresholds: Fix boundaries and detection routine
thermal/thresholds: Fix uapi header macros leading to a compilation error
Linus Torvalds [Sat, 21 Dec 2024 17:35:18 +0000 (09:35 -0800)]
Merge tag '6.13-rc3-SMB3-client-fixes' of git://git.samba.org/sfrench/cifs-2.6
Pull smb client fixes from Steve French:
- fix regression in display of write stats
- fix rmmod failure with network namespaces
- two minor cleanups
* tag '6.13-rc3-SMB3-client-fixes' of git://git.samba.org/sfrench/cifs-2.6:
smb: fix bytes written value in /proc/fs/cifs/Stats
smb: client: fix TCP timers deadlock after rmmod
smb: client: Deduplicate "select NETFS_SUPPORT" in Kconfig
smb: use macros instead of constants for leasekey size and default cifsattrs value
Linus Torvalds [Sat, 21 Dec 2024 17:32:24 +0000 (09:32 -0800)]
Merge tag 'nfs-for-6.13-2' of git://git.linux-nfs.org/projects/trondmy/linux-nfs
Pull NFS client fixes from Trond Myklebust:
- NFS/pnfs: Fix a live lock between recalled layouts and layoutget
- Fix a build warning about an undeclared symbol 'nfs_idmap_cache_timeout'
* tag 'nfs-for-6.13-2' of git://git.linux-nfs.org/projects/trondmy/linux-nfs:
fs/nfs: fix missing declaration of nfs_idmap_cache_timeout
NFS/pnfs: Fix a live lock between recalled layouts and layoutget
Linus Torvalds [Sat, 21 Dec 2024 17:29:46 +0000 (09:29 -0800)]
Merge tag 'ceph-for-6.13-rc4' of https://github.com/ceph/ceph-client
Pull ceph fixes from Ilya Dryomov:
"A handful of important CephFS fixes from Max, Alex and myself: memory
corruption due to a buffer overrun, potential infinite loop and
several memory leaks on the error paths. All but one marked for
stable"
* tag 'ceph-for-6.13-rc4' of https://github.com/ceph/ceph-client:
ceph: allocate sparse_ext map only for sparse reads
ceph: fix memory leak in ceph_direct_read_write()
ceph: improve error handling and short/overflow-read logic in __ceph_sync_read()
ceph: validate snapdirname option length when mounting
ceph: give up on paths longer than PATH_MAX
ceph: fix memory leaks in __ceph_sync_read()
Masahiro Yamada [Thu, 12 Dec 2024 15:46:15 +0000 (00:46 +0900)]
modpost: distinguish same module paths from different dump files
Since commit 13b25489b6f8 ("kbuild: change working directory to external
module directory with M="), module paths are always relative to the top
of the external module tree.
The module paths recorded in Module.symvers are no longer globally unique
when they are passed via KBUILD_EXTRA_SYMBOLS for building other external
modules, which may result in false-positive "exported twice" errors.
Such errors should not occur because external modules should be able to
override in-tree modules.
To address this, record the dump file path in struct module and check it
when searching for a module.
Fixes: 13b25489b6f8 ("kbuild: change working directory to external module directory with M=") Reported-by: Jon Hunter <jonathanh@nvidia.com> Closes: https://lore.kernel.org/all/eb21a546-a19c-40df-b821-bbba80f19a3d@nvidia.com/ Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Tested-by: Jon Hunter <jonathanh@nvidia.com>
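A hedged sketch of the lookup-key change the commit above describes; the
structure and function names are stand-ins, not the real modpost code:
  #include <stdbool.h>
  #include <stddef.h>
  #include <string.h>

  /* Illustrative stand-in for modpost's module bookkeeping. */
  struct module_entry {
          const char *modname;   /* path, relative to its own build tree */
          const char *dumpfile;  /* dump file it was read from, NULL for this build */
  };

  static bool same_module(const struct module_entry *m,
                          const char *modname, const char *dumpfile)
  {
          if (strcmp(m->modname, modname) != 0)
                  return false;
          /*
           * Identical relative paths coming from different dump files are
           * different modules, so they must not trigger "exported twice".
           */
          if (!m->dumpfile || !dumpfile)
                  return m->dumpfile == dumpfile;
          return strcmp(m->dumpfile, dumpfile) == 0;
  }
Comparing the dump file alongside the path keeps external-module overrides
working while still catching genuine duplicate exports within one tree.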
Nicolas Schier [Thu, 12 Dec 2024 13:05:29 +0000 (14:05 +0100)]
kbuild: deb-pkg: Do not install maint scripts for arch 'um'
Stop installing Debian maintainer scripts when building a
user-mode-linux Debian package.
Debian maintainer scripts are used for e.g. requesting rebuilds of the
initrd, rebuilding DKMS modules and updating the grub configuration. As
all of this is not relevant for UML and may even lead to failures while
processing the kernel hooks, no longer install maintainer scripts for
the UML package.
Masahiro Yamada [Tue, 3 Dec 2024 11:14:45 +0000 (20:14 +0900)]
kbuild: deb-pkg: add debarch for ARCH=um
'make ARCH=um bindeb-pkg' shows the following warning.
$ make ARCH=um bindeb-pkg
[snip]
GEN debian
** ** ** WARNING ** ** **
Your architecture doesn't have its equivalent
Debian userspace architecture defined!
Falling back to the current host architecture (amd64).
Please add support for um to ./scripts/package/mkdebian ...
This commit hard-codes i386/amd64 because UML is only supported for x86.
Signed-off-by: Masahiro Yamada <masahiroy@kernel.org> Reviewed-by: Nicolas Schier <nicolas@fjasle.eu>
Geert Uytterhoeven [Thu, 5 Dec 2024 13:20:43 +0000 (14:20 +0100)]
kbuild: Drop support for include/asm-<arch> in headers_check.pl
"include/asm-<arch>" was replaced by "arch/<arch>/include/asm" a long
time ago. All assembler header files are now included using
"#include <asm/*>", so there is no longer a need to rewrite paths.
Cong Wang [Fri, 13 Dec 2024 03:40:55 +0000 (19:40 -0800)]
selftests/bpf: Add a BPF selftest for bpf_skb_change_tail()
As requested by Daniel, we need to add a selftest to cover
bpf_skb_change_tail() cases in skb_verdict. Here we test trimming,
growing and error cases, and validate the expected return values and the
expected payload sizes.
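A rough sketch of what such a check can look like in an sk_skb verdict
program; this is not the actual selftest source, and the trim length,
program name and section name are assumptions for illustration:
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  SEC("sk_skb/verdict")
  int skb_verdict_trim(struct __sk_buff *skb)
  {
          /* Trim one byte off the payload; a non-zero return value means
           * the helper rejected the new length. */
          if (skb->len > 1 && bpf_skb_change_tail(skb, skb->len - 1, 0))
                  return SK_DROP;
          return SK_PASS;
  }

  char _license[] SEC("license") = "GPL";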
Cong Wang [Fri, 13 Dec 2024 03:40:54 +0000 (19:40 -0800)]
bpf: Check negative offsets in __bpf_skb_min_len()
skb_network_offset() and skb_transport_offset() can be negative when
they are called after we pull the transport header, for example, when
we use eBPF sockmap at the point of ->sk_data_ready().
__bpf_skb_min_len() uses an unsigned int to hold these offsets, so a
negative offset wraps into a very large number, which then causes
bpf_skb_change_tail() to fail unexpectedly.
Fix this by using a signed int to get these offsets and ensure the
minimum is at least zero.
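A minimal sketch of that fix, omitting the checksum-offset case the real
net/core/filter.c helper also handles; the function name is illustrative:
  #include <linux/skbuff.h>

  static u32 skb_min_len_sketch(const struct sk_buff *skb)
  {
          /* Keep the offset signed: after the transport header has been
           * pulled, skb_transport_offset() can legitimately be negative. */
          int min_len = skb_network_offset(skb);

          if (skb_transport_header_was_set(skb))
                  min_len = skb_transport_offset(skb);

          /* Clamp at zero instead of letting a negative value wrap. */
          return min_len > 0 ? min_len : 0;
  }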
This is caused by tcp_bpf_sendmsg() returning a larger value (12289) than
the requested size (8192), which causes the while loop in splice_to_socket()
to release an uninitialized pipe buf.
The underlying cause is that this code assumes sk_msg_memcopy_from_iter()
will copy all bytes upon success, but it actually might only copy part of
them.
This commit changes it to use the real number of copied bytes.
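A simplified, self-contained illustration of the accounting pattern (not
the kernel code, and the helper names are made up): when the copy helper
may move fewer bytes than requested, the caller has to add the helper's
return value to its running total rather than the requested length:
  #include <stddef.h>
  #include <string.h>

  /* Stand-in for a copy helper that may move fewer bytes than asked for. */
  static size_t copy_partial(char *dst, const char *src, size_t want)
  {
          size_t done = want < 8 ? want : 8;   /* pretend only 8 bytes fit */

          memcpy(dst, src, done);
          return done;
  }

  static size_t send_all(char *dst, const char *src, size_t len)
  {
          size_t copied = 0;

          while (copied < len)
                  /* Account for what was really copied, not the request. */
                  copied += copy_partial(dst + copied, src + copied,
                                         len - copied);
          return copied;
  }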
Linus Torvalds [Fri, 20 Dec 2024 21:48:41 +0000 (13:48 -0800)]
Merge tag 'hwmon-for-v6.13-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging
Pull hwmon fixes from Guenter Roeck:
- Fix reporting of negative temperature, current, and voltage values in
the tmp513 driver
* tag 'hwmon-for-v6.13-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging:
hwmon: (tmp513) Fix interpretation of values of Temperature Result and Limit Registers
hwmon: (tmp513) Fix Current Register value interpretation
hwmon: (tmp513) Fix interpretation of values of Shunt Voltage and Limit Registers
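A hedged illustration of the kind of fix involved, not the tmp513 driver
code: the affected registers hold two's-complement values, so they need to
be sign-extended from their field width before scaling; the shift and bit
position parameters below are assumptions, not the tmp513 register layout:
  #include <linux/bitops.h>
  #include <linux/types.h>

  /* Decode a left-justified two's-complement register field. */
  static long decode_signed_field(u16 reg, unsigned int shift, unsigned int msb)
  {
          /* Sign-extend from the field's most significant bit so negative
           * readings do not show up as large positive values. */
          return sign_extend32(reg >> shift, msb);
  }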
Rob Herring (Arm) [Fri, 20 Dec 2024 21:06:47 +0000 (15:06 -0600)]
of: Add coreboot firmware to excluded default cells list
Google Juniper and other Chromebook platforms have a very old bootloader
which populates the /firmware node without proper address/size-cells,
leading to warnings:
Missing '#address-cells' in /firmware
WARNING: CPU: 0 PID: 1 at drivers/of/base.c:106 of_bus_n_addr_cells+0x90/0xf0
Modules linked in:
CPU: 0 UID: 0 PID: 1 Comm: swapper/0 Not tainted 6.12.0 #1 933ab9971ff4d5dc58cb378a96f64c7f72e3454d
Hardware name: Google juniper sku16 board (DT)
...
Missing '#size-cells' in /firmware
WARNING: CPU: 0 PID: 1 at drivers/of/base.c:133 of_bus_n_size_cells+0x90/0xf0
Modules linked in:
CPU: 0 UID: 0 PID: 1 Comm: swapper/0 Tainted: G W 6.12.0 #1 933ab9971ff4d5dc58cb378a96f64c7f72e3454d
Tainted: [W]=WARN
Hardware name: Google juniper sku16 board (DT)
These platforms won't receive an updated bootloader/firmware, so add an
exclusion for platforms with a "coreboot" compatible node. While this is
wider than necessary, it is the easiest fix and it doesn't matter if we
miss checking other platforms using coreboot.
We may revisit this later and address with a fixup to the DT itself.
Reported-by: Sasha Levin <sashal@kernel.org> Closes: https://lore.kernel.org/all/Z0NUdoG17EwuCigT@sashalap/ Cc: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Cc: Matthias Brugger <matthias.bgg@gmail.com> Cc: Chen-Yu Tsai <wenst@chromium.org> Cc: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: Rob Herring (Arm) <robh@kernel.org>
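A minimal sketch of the exclusion under the assumptions above; the actual
drivers/of/base.c change may be structured differently, and the helper
name is illustrative:
  #include <linux/of.h>

  static bool dt_populated_by_coreboot(void)
  {
          struct device_node *np;

          /* The commit keys the exclusion on a "coreboot" compatible node. */
          np = of_find_compatible_node(NULL, NULL, "coreboot");
          if (!np)
                  return false;

          of_node_put(np);
          return true;
  }
Callers that would warn about missing #address-cells/#size-cells can check
this and silently fall back to the defaults instead.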
Linus Torvalds [Fri, 20 Dec 2024 21:37:58 +0000 (13:37 -0800)]
Merge tag 'block-6.13-20241220' of git://git.kernel.dk/linux
Pull block fixes from Jens Axboe:
- Minor cleanups for bdev/nvme using the helpers introduced
- Revert of a deadlock fix that still needs more work
- Fix a UAF of hctx in the cpu hotplug code
* tag 'block-6.13-20241220' of git://git.kernel.dk/linux:
block: avoid to reuse `hctx` not removed from cpuhp callback list
block: Revert "block: Fix potential deadlock while freezing queue and acquiring sysfs_lock"
nvme: use blk_validate_block_size() for max LBA check
block/bdev: use helper for max block size check
Linus Torvalds [Fri, 20 Dec 2024 21:32:43 +0000 (13:32 -0800)]
Merge tag 'io_uring-6.13-20241220' of git://git.kernel.dk/linux
Pull io_uring fixes from Jens Axboe:
- Fix for a file ref leak for registered ring fds
- Turn the ->timeout_lock into a raw spinlock, as it nests under the
io-wq lock, which is a raw spinlock because it's called from the
scheduler side
- Limit ring resizing to DEFER_TASKRUN for now. We will broaden this in
the future, but for now, ensure that it's only feasible on rings with
a single user
- Add sanity check for io-wq enqueuing
* tag 'io_uring-6.13-20241220' of git://git.kernel.dk/linux:
io_uring: check if iowq is killed before queuing
io_uring/register: limit ring resizing to DEFER_TASKRUN
io_uring: Fix registered ring file refcount leak
io_uring: make ctx->timeout_lock a raw spinlock