www.infradead.org Git - users/hch/block.git/log
users/hch/block.git
2 years agoblock: mark blk_put_queue as potentially blocking block-queue-kobj
Christoph Hellwig [Sat, 5 Nov 2022 08:07:36 +0000 (09:07 +0100)]
block: mark blk_put_queue as potentially blocking

We can't just say that the last reference release may block, as any
reference dropped could be the last one.  So move the might_sleep() from
blk_free_queue to blk_put_queue and update the documentation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
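
A minimal sketch of the resulting entry point, assuming the refcount_t
queue refcount introduced by the "untangle request_queue refcounting
from sysfs" patch below; the member name is an assumption, not a quote
of the diff:

#include <linux/blkdev.h>

void blk_put_queue(struct request_queue *q)
{
        /* any put may drop the last reference, so annotate the whole call */
        might_sleep();
        if (refcount_dec_and_test(&q->refs))
                blk_free_queue(q);
}
EXPORT_SYMBOL(blk_put_queue);
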
2 years agoblock: untangle request_queue refcounting from sysfs
Christoph Hellwig [Thu, 10 Nov 2022 11:00:20 +0000 (12:00 +0100)]
block: untangle request_queue refcounting from sysfs

The kobject embedded into the request_queue is used for the queue
directory in sysfs, but that is a child of the gendisk's directory and
is intimately tied to it.  Move this kobject to the gendisk and use a
refcount_t in the request_queue for the actual request_queue refcounting
that is completely unrelated to the device model.

Signed-off-by: Christoph Hellwig <hch@lst.de>
2 years agoblock: fix error unwinding in blk_register_queue
Christoph Hellwig [Thu, 10 Nov 2022 10:42:43 +0000 (11:42 +0100)]
block: fix error unwinding in blk_register_queue

blk_register_queue fails to handle errors from blk_mq_sysfs_register,
leaks various resources on errors, and accidentally sets the queue's
percpu refcount to percpu mode on kobject_add failure.  Fix all that by
properly unwinding on errors.

Signed-off-by: Christoph Hellwig <hch@lst.de>
2 years agoblock: factor out a blk_debugfs_remove helper
Christoph Hellwig [Thu, 10 Nov 2022 11:01:42 +0000 (12:01 +0100)]
block: factor out a blk_debugfs_remove helper

Split the debugfs removal from blk_unregister_queue into a helper so
that it can be reused for blk_register_queue error handling.

Signed-off-by: Christoph Hellwig <hch@lst.de>
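
A hedged sketch of what such a helper could look like, built from the
existing queue debugfs members; the body in the actual patch may differ:

static void blk_debugfs_remove(struct gendisk *disk)
{
        struct request_queue *q = disk->queue;

        mutex_lock(&q->debugfs_mutex);
        blk_trace_shutdown(q);
        debugfs_remove_recursive(q->debugfs_dir);
        q->debugfs_dir = NULL;
        q->sched_debugfs_dir = NULL;
        q->rqos_debugfs_dir = NULL;
        mutex_unlock(&q->debugfs_mutex);
}

blk_unregister_queue keeps calling this as before, and the
blk_register_queue error path from the patch above can now reuse it.
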
2 years agoblk-crypto: pass a gendisk to blk_crypto_sysfs_{,un}register
Christoph Hellwig [Sat, 21 May 2022 09:08:58 +0000 (11:08 +0200)]
blk-crypto: pass a gendisk to blk_crypto_sysfs_{,un}register

Prepare for changes to the block layer sysfs handling by passing the
readily available gendisk to blk_crypto_sysfs_{,un}register.

Signed-off-by: Christoph Hellwig <hch@lst.de>
2 years agoMerge branch 'for-6.2/block' into for-next
Jens Axboe [Wed, 9 Nov 2022 19:46:50 +0000 (12:46 -0700)]
Merge branch 'for-6.2/block' into for-next

* for-6.2/block:
  bfq: ignore oom_bfqq in bfq_check_waker
  bfq: fix waker_bfqq inconsistency crash
  drbd: Store op in drbd_peer_request
  drbd: disable discard support if granularity > max
  drbd: use blk_queue_max_discard_sectors helper

2 years agobfq: ignore oom_bfqq in bfq_check_waker
Khazhismel Kumykov [Tue, 8 Nov 2022 18:10:30 +0000 (10:10 -0800)]
bfq: ignore oom_bfqq in bfq_check_waker

oom_bfqq is just a fallback bfqq, so it shouldn't be used for waker
detection.

Suggested-by: Jan Kara <jack@suse.cz>
Signed-off-by: Khazhismel Kumykov <khazhy@google.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20221108181030.1611703-2-khazhy@google.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agobfq: fix waker_bfqq inconsistency crash
Khazhismel Kumykov [Tue, 8 Nov 2022 18:10:29 +0000 (10:10 -0800)]
bfq: fix waker_bfqq inconsistency crash

This fixes crashes in bfq_add_bfqq_busy due to waker_bfqq being NULL,
but woken_list_node still being hashed. This would happen when
bfq_init_rq() expects a brand new allocated queue to be returned from
bfq_get_bfqq_handle_split() and unconditionally updates waker_bfqq
without resetting woken_list_node. Since we can always return oom_bfqq
when attempting to allocate, we cannot assume waker_bfqq starts as NULL.

Avoid setting woken_bfqq for oom_bfqq entirely, as it's not useful.

Crashes would have a stacktrace like:
[160595.656560]  bfq_add_bfqq_busy+0x110/0x1ec
[160595.661142]  bfq_add_request+0x6bc/0x980
[160595.666602]  bfq_insert_request+0x8ec/0x1240
[160595.671762]  bfq_insert_requests+0x58/0x9c
[160595.676420]  blk_mq_sched_insert_request+0x11c/0x198
[160595.682107]  blk_mq_submit_bio+0x270/0x62c
[160595.686759]  __submit_bio_noacct_mq+0xec/0x178
[160595.691926]  submit_bio+0x120/0x184
[160595.695990]  ext4_mpage_readpages+0x77c/0x7c8
[160595.701026]  ext4_readpage+0x60/0xb0
[160595.705158]  filemap_read_page+0x54/0x114
[160595.711961]  filemap_fault+0x228/0x5f4
[160595.716272]  do_read_fault+0xe0/0x1f0
[160595.720487]  do_fault+0x40/0x1c8

Tested by injecting random failures into bfq_get_queue, crashes go away
completely.

Fixes: 8ef3fc3a043c ("block, bfq: make shared queues inherit wakers")
Signed-off-by: Khazhismel Kumykov <khazhy@google.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20221108181030.1611703-1-khazhy@google.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agodrbd: Store op in drbd_peer_request
Christoph Böhmwalder [Wed, 9 Nov 2022 13:34:53 +0000 (14:34 +0100)]
drbd: Store op in drbd_peer_request

(Sort of) cherry-picked from the out-of-tree drbd9 branch. Original
commit message by Joel Colledge:

    This simplifies drbd_submit_peer_request by removing most of the
    arguments. It also makes the treatment of the op better aligned with
    that in struct bio.

    Determine fault_type dynamically using information which is already
    available instead of passing it in as a parameter.

Note: The opf in receive_rs_deallocated was changed from
REQ_OP_WRITE_ZEROES to REQ_OP_DISCARD. This was required in the
out-of-tree module, and does not matter in-tree. The opf is ignored
anyway in drbd_submit_peer_request, since the discard/zero-out is
decided by the EE_TRIM flag.

Signed-off-by: Joel Colledge <joel.colledge@linbit.com>
Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
Link: https://lore.kernel.org/r/20221109133453.51652-4-christoph.boehmwalder@linbit.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agodrbd: disable discard support if granularity > max
Philipp Reisner [Wed, 9 Nov 2022 13:34:52 +0000 (14:34 +0100)]
drbd: disable discard support if granularity > max

The discard_granularity describes the minimum unit of a discard.
If that is larger than the maximal discard size, we need to disable
discards completely.

Reviewed-by: Joel Colledge <joel.colledge@linbit.com>
Signed-off-by: Philipp Reisner <philipp.reisner@linbit.com>
Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
Link: https://lore.kernel.org/r/20221109133453.51652-3-christoph.boehmwalder@linbit.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agodrbd: use blk_queue_max_discard_sectors helper
Christoph Böhmwalder [Wed, 9 Nov 2022 13:34:51 +0000 (14:34 +0100)]
drbd: use blk_queue_max_discard_sectors helper

We currently only set q->limits.max_discard_sectors, but that is not
enough. Another field, max_hw_discard_sectors, was introduced in
commit 0034af036554 ("block: make /sys/block/<dev>/queue/discard_max_bytes
writeable").

The difference is that max_discard_sectors can be changed from user
space via sysfs, while max_hw_discard_sectors is the "hardware" upper
limit.

So use this helper, which sets both.

This is also a fixup for commit 998e9cbcd615 ("drbd: cleanup
decide_on_discard_support"): if discards are not supported, that does
not necessarily mean we also want to disable write_zeroes.

Fixes: 998e9cbcd615 ("drbd: cleanup decide_on_discard_support")
Reviewed-by: Joel Colledge <joel.colledge@linbit.com>
Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
Link: https://lore.kernel.org/r/20221109133453.51652-2-christoph.boehmwalder@linbit.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
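
For reference, the helper sets both fields, which is exactly the
behavior the commit message relies on; a sketch of its body using the
generic struct queue_limits fields:

void blk_queue_max_discard_sectors(struct request_queue *q,
                unsigned int max_discard_sectors)
{
        /* "hardware" ceiling that sysfs writes are clamped against */
        q->limits.max_hw_discard_sectors = max_discard_sectors;
        /* user-visible limit, tunable via queue/discard_max_bytes */
        q->limits.max_discard_sectors = max_discard_sectors;
}

So drbd calls this helper instead of assigning
q->limits.max_discard_sectors directly.
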
2 years agoMerge branches 'for-6.2/block' and 'for-next' into for-next
Jens Axboe [Wed, 9 Nov 2022 18:34:08 +0000 (11:34 -0700)]
Merge branches 'for-6.2/block' and 'for-next' into for-next

* for-6.2/block:
  ABI: sysfs-bus-pci: add documentation for p2pmem allocate
  PCI/P2PDMA: Allow userspace VMA allocations through sysfs
  block: set FOLL_PCI_P2PDMA in bio_map_user_iov()
  block: set FOLL_PCI_P2PDMA in __bio_iov_iter_get_pages()
  lib/scatterlist: add check when merging zone device pages
  block: add check when merging zone device pages
  iov_iter: introduce iov_iter_get_pages_[alloc_]flags()
  mm: introduce FOLL_PCI_P2PDMA to gate getting PCI P2PDMA pages
  mm: allow multiple error returns in try_grab_page()

* for-next:

2 years agoABI: sysfs-bus-pci: add documentation for p2pmem allocate
Logan Gunthorpe [Fri, 21 Oct 2022 17:41:16 +0000 (11:41 -0600)]
ABI: sysfs-bus-pci: add documentation for p2pmem allocate

Add documentation for the p2pmem/allocate binary file which allows
for allocating p2pmem buffers in userspace for passing to drivers
that support them. (Currently only O_DIRECT to NVMe devices.)

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20221021174116.7200-10-logang@deltatee.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoPCI/P2PDMA: Allow userspace VMA allocations through sysfs
Logan Gunthorpe [Fri, 21 Oct 2022 17:41:15 +0000 (11:41 -0600)]
PCI/P2PDMA: Allow userspace VMA allocations through sysfs

Create a sysfs bin attribute called "allocate" under the existing
"p2pmem" group. The only allowable operation on this file is the mmap()
call.

When mmap() is called on this attribute, the kernel allocates a chunk of
memory from the genalloc and inserts the pages into the VMA. The
dev_pagemap .page_free callback will indicate when these pages are no
longer used and they will be put back into the genalloc.

On device unbind, remove the sysfs file before the memremap_pages are
cleaned up. This ensures unmap_mapping_range() is called on the file's
inode and no new mappings can be created.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20221021174116.7200-9-logang@deltatee.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
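
A hedged user-space sketch of the resulting interface; the sysfs path,
device names and buffer size are illustrative assumptions, and the
O_DIRECT consumer must be a driver that accepts P2PDMA pages (currently
NVMe):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        int pfd = open("/sys/bus/pci/devices/0000:01:00.0/p2pmem/allocate",
                       O_RDWR);
        int nfd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
        size_t len = 2 * 1024 * 1024;
        void *buf;

        if (pfd < 0 || nfd < 0) {
                perror("open");
                return 1;
        }
        /* mmap() is the only operation the attribute supports */
        buf = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, pfd, 0);
        if (buf == MAP_FAILED) {
                perror("mmap");
                return 1;
        }
        /* the P2P pages can now back an O_DIRECT read on the namespace */
        if (pread(nfd, buf, len, 0) < 0)
                perror("pread");
        munmap(buf, len);
        close(nfd);
        close(pfd);
        return 0;
}
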
2 years agoblock: set FOLL_PCI_P2PDMA in bio_map_user_iov()
Logan Gunthorpe [Fri, 21 Oct 2022 17:41:14 +0000 (11:41 -0600)]
block: set FOLL_PCI_P2PDMA in bio_map_user_iov()

When a bio's queue supports PCI P2PDMA, set FOLL_PCI_P2PDMA for
iov_iter_get_pages_flags(). This allows PCI P2PDMA pages to be
passed from userspace and enables the NVMe passthru requests to
use P2PDMA pages.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20221021174116.7200-8-logang@deltatee.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoblock: set FOLL_PCI_P2PDMA in __bio_iov_iter_get_pages()
Logan Gunthorpe [Fri, 21 Oct 2022 17:41:13 +0000 (11:41 -0600)]
block: set FOLL_PCI_P2PDMA in __bio_iov_iter_get_pages()

When a bio's queue supports PCI P2PDMA, set FOLL_PCI_P2PDMA for
iov_iter_get_pages_flags(). This allows PCI P2PDMA pages to be passed
from userspace and enables the O_DIRECT path in iomap based filesystems
and direct to block devices.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20221021174116.7200-7-logang@deltatee.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agolib/scatterlist: add check when merging zone device pages
Logan Gunthorpe [Fri, 21 Oct 2022 17:41:12 +0000 (11:41 -0600)]
lib/scatterlist: add check when merging zone device pages

Consecutive zone device pages should not be merged into the same sgl
or bvec segment with other types of pages or if they belong to different
pgmaps. Otherwise getting the pgmap of a given segment is not possible
without scanning the entire segment. This helper returns true if both
pages are not zone device pages, or if both are zone device pages with
the same pgmap.

Factor out the check for page mergeability into a pages_are_mergable()
helper and add a check with zone_device_pages_are_mergeable().

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20221021174116.7200-6-logang@deltatee.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
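
A hedged sketch of the check both this patch and the block layer patch
below describe; the helper name here is illustrative, the in-tree name
may differ:

#include <linux/memremap.h>
#include <linux/mm.h>

/*
 * Two pages may share an sgl/bvec segment only if neither is a zone
 * device page, or both are zone device pages from the same pgmap.
 */
static inline bool zone_device_pages_mergeable(const struct page *a,
                                               const struct page *b)
{
        if (is_zone_device_page(a) != is_zone_device_page(b))
                return false;
        if (!is_zone_device_page(a))
                return true;
        return a->pgmap == b->pgmap;
}
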
2 years agoblock: add check when merging zone device pages
Logan Gunthorpe [Fri, 21 Oct 2022 17:41:11 +0000 (11:41 -0600)]
block: add check when merging zone device pages

Consecutive zone device pages should not be merged into the same sgl
or bvec segment with other types of pages or if they belong to different
pgmaps. Otherwise getting the pgmap of a given segment is not possible
without scanning the entire segment. This helper returns true if both
pages are not zone device pages, or if both are zone device pages with
the same pgmap.

Add a helper to determine if zone device pages are mergeable and use
this helper in page_is_mergeable().

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20221021174116.7200-5-logang@deltatee.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoiov_iter: introduce iov_iter_get_pages_[alloc_]flags()
Logan Gunthorpe [Fri, 21 Oct 2022 17:41:10 +0000 (11:41 -0600)]
iov_iter: introduce iov_iter_get_pages_[alloc_]flags()

Add iov_iter_get_pages_flags() and iov_iter_get_pages_alloc_flags()
which take a flags argument that is passed to get_user_pages_fast().

This is so that FOLL_PCI_P2PDMA can be passed when appropriate.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20221021174116.7200-4-logang@deltatee.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agomm: introduce FOLL_PCI_P2PDMA to gate getting PCI P2PDMA pages
Logan Gunthorpe [Fri, 21 Oct 2022 17:41:09 +0000 (11:41 -0600)]
mm: introduce FOLL_PCI_P2PDMA to gate getting PCI P2PDMA pages

GUP callers that expect PCI P2PDMA pages can now set FOLL_PCI_P2PDMA to
allow obtaining P2PDMA pages. If GUP is called without the flag and a
P2PDMA page is found, it will return an error in try_grab_page() or
try_grab_folio().

The check is safe to do before taking the reference to the page in both
cases, since the page should be protected by either the appropriate
ptl or mmap_lock, or by the gup-fast guarantees preventing TLB flushes.

try_grab_folio() has one call site that WARNs on failure and cannot
actually deal with the failure of this function (it seems it will
get into an infinite loop). Expand the comment there to document a
couple more conditions on why it will not fail.

FOLL_PCI_P2PDMA cannot be set if FOLL_LONGTERM is set. This is to copy
fsdax until pgmap refcounts are fixed (see the link below for more
information).

Link: https://lkml.kernel.org/r/Yy4Ot5MoOhsgYLTQ@ziepe.ca
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20221021174116.7200-3-logang@deltatee.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
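
A hedged sketch of the gate, written against the error-returning
try_grab_page() from the "mm: allow multiple error returns in
try_grab_page()" patch below; the exact error value and placement are
assumptions:

int try_grab_page(struct page *page, unsigned int flags)
{
        /* reject P2PDMA pages unless the GUP caller opted in */
        if (unlikely(!(flags & FOLL_PCI_P2PDMA) && is_pci_p2pdma_page(page)))
                return -EREMOTEIO;

        /* ... existing FOLL_GET/FOLL_PIN reference taking follows ... */
        return 0;
}
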
2 years agomm: allow multiple error returns in try_grab_page()
Logan Gunthorpe [Fri, 21 Oct 2022 17:41:08 +0000 (11:41 -0600)]
mm: allow multiple error returns in try_grab_page()

In order to add checks for P2PDMA memory into try_grab_page(), expand
the error return from a bool to an int/error code. Update all the
callsites to handle the change in usage.

Also remove the WARN_ON_ONCE() call at the callsites, since there
already is a WARN_ON_ONCE() inside the function if it fails.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Dan Williams <dan.j.williams@intel.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20221021174116.7200-2-logang@deltatee.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoMerge branch 'for-6.2/io_uring' into for-next
Jens Axboe [Mon, 7 Nov 2022 20:18:22 +0000 (13:18 -0700)]
Merge branch 'for-6.2/io_uring' into for-next

* for-6.2/io_uring:
  io_uring: remove allow_overflow parameter
  io_uring: allow multishot recv CQEs to overflow
  io_uring: revert "io_uring: fix multishot poll on overflow"
  io_uring: revert "io_uring fix multishot accept ordering"
  io_uring: do not always force run task_work in io_uring_register

2 years agoio_uring: remove allow_overflow parameter
Dylan Yudaken [Mon, 7 Nov 2022 12:52:36 +0000 (04:52 -0800)]
io_uring: remove allow_overflow parameter

It is now always true, so just remove it.

Signed-off-by: Dylan Yudaken <dylany@meta.com>
Link: https://lore.kernel.org/r/20221107125236.260132-5-dylany@meta.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoio_uring: allow multishot recv CQEs to overflow
Dylan Yudaken [Mon, 7 Nov 2022 12:52:35 +0000 (04:52 -0800)]
io_uring: allow multishot recv CQEs to overflow

With commit aa1df3a360a0 ("io_uring: fix CQE reordering"), there are
stronger guarantees for overflow ordering; specifically, userspace will
not receive out-of-order receive CQEs. Therefore preventing overflow is
no longer needed for recv/recvmsg.

Signed-off-by: Dylan Yudaken <dylany@meta.com>
Link: https://lore.kernel.org/r/20221107125236.260132-4-dylany@meta.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoio_uring: revert "io_uring: fix multishot poll on overflow"
Dylan Yudaken [Mon, 7 Nov 2022 12:52:34 +0000 (04:52 -0800)]
io_uring: revert "io_uring: fix multishot poll on overflow"

This is no longer needed after commit aa1df3a360a0 ("io_uring: fix CQE
reordering"), since all reordering is now taken care of.

This reverts commit a2da676376fe ("io_uring: fix multishot poll on
overflow").

Signed-off-by: Dylan Yudaken <dylany@meta.com>
Link: https://lore.kernel.org/r/20221107125236.260132-3-dylany@meta.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoio_uring: revert "io_uring fix multishot accept ordering"
Dylan Yudaken [Mon, 7 Nov 2022 12:52:33 +0000 (04:52 -0800)]
io_uring: revert "io_uring fix multishot accept ordering"

This is no longer needed after commit aa1df3a360a0 ("io_uring: fix CQE
reordering"), since all reordering is now taken care of.

This reverts commit cbd25748545c ("io_uring: fix multishot accept
ordering").

Signed-off-by: Dylan Yudaken <dylany@meta.com>
Link: https://lore.kernel.org/r/20221107125236.260132-2-dylany@meta.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoio_uring: do not always force run task_work in io_uring_register
Dylan Yudaken [Mon, 7 Nov 2022 12:33:49 +0000 (04:33 -0800)]
io_uring: do not always force run task_work in io_uring_register

Running task work when not needed can unnecessarily delay
operations. Specifically IORING_SETUP_DEFER_TASKRUN tries to avoid running
task work until the user requests it. Therefore do not run it in
io_uring_register any more.

The one catch is that io_rsrc_ref_quiesce expects it to have run in order
to process all outstanding references, so reorder its loop to do this.

Signed-off-by: Dylan Yudaken <dylany@meta.com>
Link: https://lore.kernel.org/r/20221107123349.4106213-1-dylany@meta.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoMerge branch 'for-6.2/block' into for-next
Jens Axboe [Mon, 7 Nov 2022 14:21:58 +0000 (07:21 -0700)]
Merge branch 'for-6.2/block' into for-next

* for-6.2/block:
  block: Fix some kernel-doc comments

2 years agoblock: Fix some kernel-doc comments
Yang Li [Mon, 7 Nov 2022 06:22:55 +0000 (14:22 +0800)]
block: Fix some kernel-doc comments

Remove the description of @required_features in elevator_match()
to clear the below warning:

block/elevator.c:103: warning: Excess function parameter 'required_features' description in 'elevator_match'

Link: https://bugzilla.openanolis.cn/show_bug.cgi?id=2734
Fixes: ffb86425ee2c ("block: don't check for required features in elevator_match")
Reported-by: Abaci Robot <abaci@linux.alibaba.com>
Signed-off-by: Yang Li <yang.lee@linux.alibaba.com>
Link: https://lore.kernel.org/r/20221107062255.2685-1-yang.lee@linux.alibaba.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoMerge branch 'for-6.2/io_uring' into for-next
Jens Axboe [Sat, 5 Nov 2022 15:31:47 +0000 (09:31 -0600)]
Merge branch 'for-6.2/io_uring' into for-next

* for-6.2/io_uring:
  io_uring: fix two assignments in if conditions

2 years agoio_uring: fix two assignments in if conditions
Xinghui Li [Wed, 2 Nov 2022 08:25:03 +0000 (16:25 +0800)]
io_uring: fix two assignments in if conditions

Fixes two errors:

"ERROR: do not use assignment in if condition
130: FILE: io_uring/net.c:130:
+       if (!(issue_flags & IO_URING_F_UNLOCKED) &&

ERROR: do not use assignment in if condition
599: FILE: io_uring/poll.c:599:
+       } else if (!(issue_flags & IO_URING_F_UNLOCKED) &&"
reported by checkpatch.pl in net.c and poll.c.

Signed-off-by: Xinghui Li <korantli@tencent.com>
Reported-by: kernel test robot <lkp@intel.com>
Link: https://lore.kernel.org/r/20221102082503.32236-1-korantwork@gmail.com
[axboe: style tweaks]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoMerge branch 'for-6.2/io_uring' into for-next
Jens Axboe [Sat, 5 Nov 2022 15:20:46 +0000 (09:20 -0600)]
Merge branch 'for-6.2/io_uring' into for-next

* for-6.2/io_uring:
  io_uring/net: move mm accounting to a slower path
  io_uring: move zc reporting from the hot path
  io_uring/net: inline io_notif_flush()
  io_uring/net: rename io_uring_tx_zerocopy_callback
  io_uring/net: preset notif tw handler
  io_uring/net: remove extra notif rsrc setup
  io_uring: move kbuf put out of generic tw complete

2 years agoio_uring/net: move mm accounting to a slower path
Pavel Begunkov [Fri, 4 Nov 2022 10:59:46 +0000 (10:59 +0000)]
io_uring/net: move mm accounting to a slower path

We can also move mm accounting to the extended callbacks. It removes a
few cycles from the hot path including skipping one function call and
setting io_req_task_complete as a callback directly. For user backed I/O
it shouldn't make any difference, taking into consideration atomic mm
accounting and page pinning.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/1062f270273ad11c1b7b45ec59a6a317533d5e64.1667557923.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoio_uring: move zc reporting from the hot path
Pavel Begunkov [Fri, 4 Nov 2022 10:59:45 +0000 (10:59 +0000)]
io_uring: move zc reporting from the hot path

Add custom tw and notif callbacks on top of the usual bits that also
handle zc reporting. That moves it off the hot path.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/40de4a6409042478e1f35adc4912e23226cb1b5c.1667557923.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoio_uring/net: inline io_notif_flush()
Pavel Begunkov [Fri, 4 Nov 2022 10:59:44 +0000 (10:59 +0000)]
io_uring/net: inline io_notif_flush()

io_notif_flush() is pretty simple, we can inline it.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/332359e7bd124138dfe51340bbec829c9b265c18.1667557923.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoio_uring/net: rename io_uring_tx_zerocopy_callback
Pavel Begunkov [Fri, 4 Nov 2022 10:59:43 +0000 (10:59 +0000)]
io_uring/net: rename io_uring_tx_zerocopy_callback

Just a simple renaming patch, io_uring_tx_zerocopy_callback() is too
bulky and doesn't follow usual naming style.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/24d78325403ca6dcb1ec4bced1e33cacc9b832a5.1667557923.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoio_uring/net: preset notif tw handler
Pavel Begunkov [Fri, 4 Nov 2022 10:59:42 +0000 (10:59 +0000)]
io_uring/net: preset notif tw handler

We're going to have multiple notification tw functions. In preparation
for future changes default the tw callback in advance so later we can
replace it with other versions.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/7acdbea5e20eadd844513320cd454af14ba50f64.1667557923.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoio_uring/net: remove extra notif rsrc setup
Pavel Begunkov [Fri, 4 Nov 2022 10:59:41 +0000 (10:59 +0000)]
io_uring/net: remove extra notif rsrc setup

io_send_zc_prep() sets up notification's rsrc_node when needed, don't
unconditionally install it on notif alloc.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/dbe4875ac33e180b9799d8537a5e27935e82aac4.1667557923.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoio_uring: move kbuf put out of generic tw complete
Pavel Begunkov [Fri, 4 Nov 2022 10:59:40 +0000 (10:59 +0000)]
io_uring: move kbuf put out of generic tw complete

There are multiple users of io_req_task_complete() including zc
notifications, but only read requests use selected buffers. As we
already have an rw specific tw function, move io_put_kbuf() in there.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/94374c7649aaefc3a17808dc4701f25ccd457e25.1667557923.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoMerge branch 'for-6.2/block' into for-next
Jens Axboe [Wed, 2 Nov 2022 14:45:05 +0000 (08:45 -0600)]
Merge branch 'for-6.2/block' into for-next

* for-6.2/block:
  blk-mq: use if-else instead of goto in blk_mq_alloc_cached_request()
  blk-mq: improve error handling in blk_mq_alloc_rq_map()

2 years agoblk-mq: use if-else instead of goto in blk_mq_alloc_cached_request()
Jinlong Chen [Wed, 2 Nov 2022 02:52:30 +0000 (10:52 +0800)]
blk-mq: use if-else instead of goto in blk_mq_alloc_cached_request()

if-else is more readable than goto here.

Signed-off-by: Jinlong Chen <nickyc975@zju.edu.cn>
Link: https://lore.kernel.org/r/d3306fa4e92dc9cc614edc8f1802686096bafef2.1667356813.git.nickyc975@zju.edu.cn
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoblk-mq: improve error handling in blk_mq_alloc_rq_map()
Jinlong Chen [Wed, 2 Nov 2022 02:52:29 +0000 (10:52 +0800)]
blk-mq: improve error handling in blk_mq_alloc_rq_map()

Use goto-style error handling like we do elsewhere in the kernel.

Signed-off-by: Jinlong Chen <nickyc975@zju.edu.cn>
Link: https://lore.kernel.org/r/bbbc2d9b17b137798c7fb92042141ca4cbbc58cc.1667356813.git.nickyc975@zju.edu.cn
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoMerge branch 'for-6.2/block' into for-next
Jens Axboe [Wed, 2 Nov 2022 14:35:40 +0000 (08:35 -0600)]
Merge branch 'for-6.2/block' into for-next

* for-6.2/block:
  nvme: use blk_mq_[un]quiesce_tagset
  blk-mq: add tagset quiesce interface
  blk-mq: pass a tagset to blk_mq_wait_quiesce_done
  blk-mq: move the srcu_struct used for quiescing to the tagset
  blk-mq: skip non-mq queues in blk_mq_quiesce_queue
  nvme-apple: don't unquiesce the I/O queues in apple_nvme_reset_work
  nvme-pci: don't unquiesce the I/O queues in nvme_remove_dead_ctrl
  nvme: split nvme_kill_queues
  nvme: don't unquiesce the admin queue in nvme_kill_queues
  nvme: remove the NVME_NS_DEAD check in nvme_validate_ns
  nvme: remove the NVME_NS_DEAD check in nvme_remove_invalid_namespaces
  nvme: don't remove namespaces in nvme_passthru_end
  nvme-pci: refactor the tagset handling in nvme_reset_work
  block: set the disk capacity to 0 in blk_mark_disk_dead

2 years agonvme: use blk_mq_[un]quiesce_tagset
Chao Leng [Tue, 1 Nov 2022 15:00:50 +0000 (16:00 +0100)]
nvme: use blk_mq_[un]quiesce_tagset

All controller namespaces share the same tagset, so we can use this
interface, which does the optimal operation for a parallel quiesce based
on the tagset type (e.g. blocking vs non-blocking tagsets).

The nvme connect_q should not be quiesced when quiescing the tagset, so
set QUEUE_FLAG_SKIP_TAGSET_QUIESCE to skip it when initializing
connect_q.

Currently we use NVME_NS_STOPPED to ensure paired quiescing and
unquiescing. If blk_mq_[un]quiesce_tagset is used, NVME_NS_STOPPED is
invalidated, so introduce NVME_CTRL_STOPPED to replace NVME_NS_STOPPED.
In addition, we never really quiesce a single namespace, so it is a
better choice to move the flag from the ns to the ctrl.

Signed-off-by: Chao Leng <lengchao@huawei.com>
[hch: rebased on top of prep patches]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Chao Leng <lengchao@huawei.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20221101150050.3510-15-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
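
A hedged sketch of how the nvme core can wrap the new interface,
following the commit message; the helper names and the ctrl->flags bit
handling are assumptions rather than quotes of the patch:

static void nvme_quiesce_io_queues(struct nvme_ctrl *ctrl)
{
        if (!ctrl->tagset)
                return;
        if (!test_and_set_bit(NVME_CTRL_STOPPED, &ctrl->flags))
                blk_mq_quiesce_tagset(ctrl->tagset);
        else
                blk_mq_wait_quiesce_done(ctrl->tagset);
}

static void nvme_unquiesce_io_queues(struct nvme_ctrl *ctrl)
{
        if (!ctrl->tagset)
                return;
        if (test_and_clear_bit(NVME_CTRL_STOPPED, &ctrl->flags))
                blk_mq_unquiesce_tagset(ctrl->tagset);
}

The connect_q opts out by having QUEUE_FLAG_SKIP_TAGSET_QUIESCE set when
it is created, so a tagset-wide quiesce leaves it alone.
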
2 years agoblk-mq: add tagset quiesce interface
Chao Leng [Tue, 1 Nov 2022 15:00:49 +0000 (16:00 +0100)]
blk-mq: add tagset quiesce interface

Drivers that have shared tagsets may need to quiesce potentially a lot
of request queues that all share a single tagset (e.g. nvme). Add an
interface to quiesce all the queues on a given tagset. This interface is
useful because it can speed up the quiesce by doing it in parallel.

Because some queues do not need to be quiesced (e.g. the nvme
connect_q) when quiescing the tagset, introduce a
QUEUE_FLAG_SKIP_TAGSET_QUIESCE flag to allow this new interface to
skip quiescing a particular queue.

Signed-off-by: Chao Leng <lengchao@huawei.com>
[hch: simplify for the per-tag_set srcu_struct]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Chao Leng <lengchao@huawei.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20221101150050.3510-14-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
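
A hedged sketch of the tagset-wide quiesce described above, modelled on
the commit message rather than copied from the patch:

void blk_mq_quiesce_tagset(struct blk_mq_tag_set *set)
{
        struct request_queue *q;

        mutex_lock(&set->tag_list_lock);
        list_for_each_entry(q, &set->tag_list, tag_set_list) {
                /* e.g. the nvme connect_q opts out via the new queue flag */
                if (!blk_queue_skip_tagset_quiesce(q))
                        blk_mq_quiesce_queue_nowait(q);
        }
        /* one (SRCU) grace period covers every queue in the set */
        blk_mq_wait_quiesce_done(set);
        mutex_unlock(&set->tag_list_lock);
}
EXPORT_SYMBOL_GPL(blk_mq_quiesce_tagset);

Note that blk_mq_wait_quiesce_done() takes the tagset here, per the
"blk-mq: pass a tagset to blk_mq_wait_quiesce_done" patch below.
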
2 years agoblk-mq: pass a tagset to blk_mq_wait_quiesce_done
Christoph Hellwig [Tue, 1 Nov 2022 15:00:48 +0000 (16:00 +0100)]
blk-mq: pass a tagset to blk_mq_wait_quiesce_done

Nothing in blk_mq_wait_quiesce_done needs the request_queue now, so just
pass the tagset, and move the non-mq check into the only caller that
needs it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Chao Leng <lengchao@huawei.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20221101150050.3510-13-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoblk-mq: move the srcu_struct used for quiescing to the tagset
Christoph Hellwig [Tue, 1 Nov 2022 15:00:47 +0000 (16:00 +0100)]
blk-mq: move the srcu_struct used for quiescing to the tagset

All I/O submissions have fairly similar latencies, and a tagset-wide
quiesce is a fairly common operation.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Chao Leng <lengchao@huawei.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20221101150050.3510-12-hch@lst.de
[axboe: fix whitespace]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoblk-mq: skip non-mq queues in blk_mq_quiesce_queue
Christoph Hellwig [Tue, 1 Nov 2022 15:00:46 +0000 (16:00 +0100)]
blk-mq: skip non-mq queues in blk_mq_quiesce_queue

For submit_bio based queues there is no (S)RCU critical section during
I/O submission and thus nothing to wait for in blk_mq_wait_quiesce_done,
so skip doing any synchronization.  No non-mq driver should be calling
this, but for now we have core callers that unconditionally call into it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20221101150050.3510-11-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agonvme-apple: don't unquiesce the I/O queues in apple_nvme_reset_work
Christoph Hellwig [Tue, 1 Nov 2022 15:00:45 +0000 (16:00 +0100)]
nvme-apple: don't unquiesce the I/O queues in apple_nvme_reset_work

apple_nvme_reset_work schedules apple_nvme_remove to be called, which
will call apple_nvme_disable and unquiesce the I/O queues.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20221101150050.3510-10-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agonvme-pci: don't unquiesce the I/O queues in nvme_remove_dead_ctrl
Christoph Hellwig [Tue, 1 Nov 2022 15:00:44 +0000 (16:00 +0100)]
nvme-pci: don't unquiesce the I/O queues in nvme_remove_dead_ctrl

nvme_remove_dead_ctrl schedules nvme_remove to be called, which will
call nvme_dev_disable and unquiesce the I/O queues.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20221101150050.3510-9-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agonvme: split nvme_kill_queues
Christoph Hellwig [Tue, 1 Nov 2022 15:00:43 +0000 (16:00 +0100)]
nvme: split nvme_kill_queues

nvme_kill_queues does two things:

 1) mark the gendisk of all namespaces dead
 2) unquiesce all I/O queues

These used to be intertwined due to block layer issues, but aren't
any more.  So move the unquiescing of the I/O queues into the callers,
and rename the rest of the function to the now more descriptive
nvme_mark_namespaces_dead.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20221101150050.3510-8-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agonvme: don't unquiesce the admin queue in nvme_kill_queues
Christoph Hellwig [Tue, 1 Nov 2022 15:00:42 +0000 (16:00 +0100)]
nvme: don't unquiesce the admin queue in nvme_kill_queues

None of the callers of nvme_kill_queues needs it to unquiesce the
admin queues, as all of them already do it themselves:

 1) nvme_reset_work explicitly calls nvme_start_admin_queue toward the
    beginning of the function.  The extra call to nvme_start_admin_queue
    in nvme_reset_work thus won't do anything as
    NVME_CTRL_ADMIN_Q_STOPPED will already be cleared.
 2) nvme_remove calls nvme_dev_disable with shutdown flag set to true at
    the very beginning of the function if the PCIe device was not present,
    which is the precondition for the call to nvme_kill_queues.
    nvme_dev_disable already calls nvme_start_admin_queue toward the
    end of the function when the shutdown flag is set to true, so the
    admin queue is already enabled at this point.
 3) nvme_remove_dead_ctrl schedules a workqueue to unbind the driver,
    which will end up in nvme_remove, which calls nvme_dev_disable with
    the shutdown flag.  This case will call nvme_start_admin_queue a bit
    later than before.
 4) apple_nvme_remove uses the same sequence as nvme_remove_dead_ctrl
    above.
 5) nvme_remove_namespaces only calls nvme_kill_queues when the
    controller is in the DEAD state.  That can only happen in the PCIe
    driver, and only from nvme_remove. See item 2) above for the
    conditions there.

So it is safe to just remove the call to nvme_start_admin_queue in
nvme_kill_queues without replacement.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20221101150050.3510-7-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agonvme: remove the NVME_NS_DEAD check in nvme_validate_ns
Christoph Hellwig [Tue, 1 Nov 2022 15:00:41 +0000 (16:00 +0100)]
nvme: remove the NVME_NS_DEAD check in nvme_validate_ns

At the point where namespaces are marked dead, the controller is in a
non-live state and we won't get past the identify commands.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20221101150050.3510-6-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agonvme: remove the NVME_NS_DEAD check in nvme_remove_invalid_namespaces
Christoph Hellwig [Tue, 1 Nov 2022 15:00:40 +0000 (16:00 +0100)]
nvme: remove the NVME_NS_DEAD check in nvme_remove_invalid_namespaces

The NVME_NS_DEAD check only made sense when we revalidated namespaces
in nvme_passthru_end for commands that affected the namespace inventory.
These days NVME_NS_DEAD is only set during reset or when tearing down
namespaces, and we always remove all namespaces right after that.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20221101150050.3510-5-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agonvme: don't remove namespaces in nvme_passthru_end
Christoph Hellwig [Tue, 1 Nov 2022 15:00:39 +0000 (16:00 +0100)]
nvme: don't remove namespaces in nvme_passthru_end

The call to nvme_remove_invalid_namespaces made sense when
nvme_passthru_end revalidated all namespaces and had to remove those that
didn't exist any more.  Since we don't revalidate from nvme_passthru_end
now, this call is entirely spurious.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20221101150050.3510-4-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agonvme-pci: refactor the tagset handling in nvme_reset_work
Christoph Hellwig [Tue, 1 Nov 2022 15:00:38 +0000 (16:00 +0100)]
nvme-pci: refactor the tagset handling in nvme_reset_work

The code to create, update or delete a tagset and namespaces in
nvme_reset_work is a bit convoluted.  Refactor it with two high-level
conditionals, for first probe vs reset and I/O queues vs no I/O queues,
to make the code flow clearer.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20221101150050.3510-3-hch@lst.de
[axboe: fix whitespace issue]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoblock: set the disk capacity to 0 in blk_mark_disk_dead
Christoph Hellwig [Tue, 1 Nov 2022 15:00:37 +0000 (16:00 +0100)]
block: set the disk capacity to 0 in blk_mark_disk_dead

nvme and xen-blkfront are already doing this to stop buffered writes from
creating dirty pages that can't be written out later.  Move it to the
common code.

This also removes the comment about the ordering from nvme: bd_mutex
is not only gone entirely, it also hadn't been used for locking updates
to the disk size long before that, and thus the ordering requirement
documented there doesn't apply any more.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Chao Leng <lengchao@huawei.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Link: https://lore.kernel.org/r/20221101150050.3510-2-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
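
A hedged sketch of the resulting helper; the GD_DEAD and queue-drain
parts already exist, the capacity reset is what this patch moves in from
the drivers (whether the plain or notifying set_capacity variant is used
is a detail of the actual patch):

void blk_mark_disk_dead(struct gendisk *disk)
{
        set_bit(GD_DEAD, &disk->state);
        blk_queue_start_drain(disk->queue);

        /*
         * Moved from nvme and xen-blkfront: stop buffered writers from
         * dirtying pages that can never be written back.
         */
        set_capacity(disk, 0);
}
EXPORT_SYMBOL_GPL(blk_mark_disk_dead);
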
2 years agoMerge branch 'for-6.2/io_uring' into for-next
Jens Axboe [Wed, 2 Nov 2022 14:01:50 +0000 (08:01 -0600)]
Merge branch 'for-6.2/io_uring' into for-next

* for-6.2/io_uring: (332 commits)
  io_uring/net: introduce IORING_SEND_ZC_REPORT_USAGE flag
  Linux 6.1-rc3
  platform/loongarch: laptop: Fix possible UAF and simplify generic_acpi_laptop_init()
  platform/loongarch: laptop: Adjust resume order for loongson_hotkey_resume()
  LoongArch: BPF: Avoid declare variables in switch-case
  LoongArch: Use flexible-array member instead of zero-length array
  LoongArch: Remove unused kernel stack padding
  random: use arch_get_random*_early() in random_init()
  mm: multi-gen LRU: move lru_gen_add_mm() out of IRQ-off region
  lib: maple_tree: remove unneeded initialization in mtree_range_walk()
  mmap: fix remap_file_pages() regression
  mm/shmem: ensure proper fallback if page faults
  mm/userfaultfd: replace kmap/kmap_atomic() with kmap_local_page()
  x86: fortify: kmsan: fix KMSAN fortify builds
  x86: asm: make sure __put_user_size() evaluates pointer once
  Kconfig.debug: disable CONFIG_FRAME_WARN for KMSAN by default
  x86/purgatory: disable KMSAN instrumentation
  mm: kmsan: export kmsan_copy_page_meta()
  mm: migrate: fix return value if all subpages of THPs are migrated successfully
  mm/uffd: fix vma check on userfault for wp
  ...

2 years agoio_uring/net: introduce IORING_SEND_ZC_REPORT_USAGE flag
Stefan Metzmacher [Thu, 27 Oct 2022 18:34:45 +0000 (20:34 +0200)]
io_uring/net: introduce IORING_SEND_ZC_REPORT_USAGE flag

It might be useful for applications to detect if a zero copy transfer with
SEND[MSG]_ZC was actually possible or not. The application can fall back to
plain SEND[MSG] in order to avoid the overhead of two cqes per request. Or
it can generate a log message that could indicate to an administrator that
no zero copy was possible and could explain degraded performance.

Cc: stable@vger.kernel.org # 6.1
Link: https://lore.kernel.org/io-uring/fb6a7599-8a9b-15e5-9b64-6cd9d01c6ff4@gmail.com/T/#m2b0d9df94ce43b0e69e6c089bdff0ce6babbdfaa
Signed-off-by: Stefan Metzmacher <metze@samba.org>
Reviewed-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/8945b01756d902f5d5b0667f20b957ad3f742e5e.1666895626.git.metze@samba.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
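
A hedged user-space sketch (liburing) of how the flag can be consumed;
the zc_flags placement and the IORING_NOTIF_USAGE_ZC_COPIED result bit
follow my reading of the series and should be treated as assumptions:

#include <liburing.h>
#include <stdio.h>

static void send_zc_with_report(struct io_uring *ring, int sockfd,
                                const void *buf, size_t len)
{
        struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
        struct io_uring_cqe *cqe;

        io_uring_prep_send_zc(sqe, sockfd, buf, len, 0,
                              IORING_SEND_ZC_REPORT_USAGE);
        io_uring_submit(ring);

        /* first CQE: send result; second CQE (F_NOTIF): buffer reuse */
        while (io_uring_wait_cqe(ring, &cqe) == 0) {
                int done = cqe->flags & IORING_CQE_F_NOTIF;

                if (done && (cqe->res & IORING_NOTIF_USAGE_ZC_COPIED))
                        fprintf(stderr, "data was copied, not zero-copied\n");
                io_uring_cqe_seen(ring, cqe);
                if (done)
                        break;
        }
}
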
2 years agoMerge branch 'for-6.2/block' into for-next
Jens Axboe [Wed, 2 Nov 2022 02:10:59 +0000 (20:10 -0600)]
Merge branch 'for-6.2/block' into for-next

* for-6.2/block:
  block, bfq: don't declare 'bfqd' as type 'void *' in bfq_group
  block, bfq: remove dead code for updating 'rq_in_driver'
  block, bfq: cleanup bfq_activate_requeue_entity()
  block, bfq: factor out code to update 'active_entities'
  block, bfq: remove set but not used variable in __bfq_entity_update_weight_prio

2 years agoblock, bfq: don't declare 'bfqd' as type 'void *' in bfq_group
Yu Kuai [Wed, 2 Nov 2022 02:25:42 +0000 (10:25 +0800)]
block, bfq: don't declare 'bfqd' as type 'void *' in bfq_group

Prevent unnecessary format conversion for bfqg->bfqd in multiple
places.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Acked-by: Paolo Valente <paolo.valente@unimore.it>
Link: https://lore.kernel.org/r/20221102022542.3621219-6-yukuai1@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoblock, bfq: remove dead code for updating 'rq_in_driver'
Yu Kuai [Wed, 2 Nov 2022 02:25:41 +0000 (10:25 +0800)]
block, bfq: remove dead code for updating 'rq_in_driver'

Such code is not even compiled, since it is inside "#if 0" blocks.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Acked-by: Paolo Valente <paolo.valente@unimore.it>
Link: https://lore.kernel.org/r/20221102022542.3621219-5-yukuai1@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoblock, bfq: cleanup bfq_activate_requeue_entity()
Yu Kuai [Wed, 2 Nov 2022 02:25:40 +0000 (10:25 +0800)]
block, bfq: cleanup bfq_activate_requeue_entity()

Just make the code a little cleaner by removing the unnecessary
variable 'sd'.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Acked-by: Paolo Valente <paolo.valente@unimore.it>
Link: https://lore.kernel.org/r/20221102022542.3621219-4-yukuai1@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoblock, bfq: factor out code to update 'active_entities'
Yu Kuai [Wed, 2 Nov 2022 02:25:39 +0000 (10:25 +0800)]
block, bfq: factor out code to update 'active_entities'

Current code is a bit ugly and hard to read.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Acked-by: Paolo Valente <paolo.valente@unimore.it>
Link: https://lore.kernel.org/r/20221102022542.3621219-3-yukuai1@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoblock, bfq: remove set but not used variable in __bfq_entity_update_weight_prio
Yu Kuai [Wed, 2 Nov 2022 02:25:38 +0000 (10:25 +0800)]
block, bfq: remove set but not used variable in __bfq_entity_update_weight_prio

After the patch "block, bfq: cleanup bfq_weights_tree add/remove apis"),
the local variable 'bfqd' is not used anymore, thus remove it.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Link: https://lore.kernel.org/r/20221102022542.3621219-2-yukuai1@huaweicloud.com
Fixes: afdba1461262 ("block, bfq: cleanup bfq_weights_tree add/remove apis")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoMerge branch 'for-6.2/block' into for-next
Jens Axboe [Tue, 1 Nov 2022 15:12:42 +0000 (09:12 -0600)]
Merge branch 'for-6.2/block' into for-next

* for-6.2/block:
  block: Replace struct rq_depth with unsigned int in struct iolatency_grp
  block: Correct comment for scale_cookie_change
  block: Remove redundant parent blkcg_gp check in check_scale_change
  block: split elevator_switch
  block: don't check for required features in elevator_match
  block: simplify the check for the current elevator in elv_iosched_show
  block: cleanup the variable naming in elv_iosched_store
  block: exit elv_iosched_show early when I/O schedulers are not supported
  block: cleanup elevator_get

2 years agoblock: Replace struct rq_depth with unsigned int in struct iolatency_grp
Kemeng Shi [Tue, 18 Oct 2022 11:12:40 +0000 (19:12 +0800)]
block: Replace struct rq_depth with unsigned int in struct iolatency_grp

We only need a max queue depth for every iolatency group to limit the
number of in-flight I/Os. Replace struct rq_depth with an unsigned int
to simplify "struct iolatency_grp" and save memory.

Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Link: https://lore.kernel.org/r/20221018111240.22612-4-shikemeng@huawei.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoblock: Correct comment for scale_cookie_change
Kemeng Shi [Tue, 18 Oct 2022 11:12:39 +0000 (19:12 +0800)]
block: Correct comment for scale_cookie_change

The default queue depth of iolatency_grp is unlimited, so we scale down
quickly (once by half) in scale_cookie_change. Remove the "subtract
1/16th" part, which is not accurate, and describe the actual way we
scale down.

Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
Link: https://lore.kernel.org/r/20221018111240.22612-3-shikemeng@huawei.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoblock: Remove redundant parent blkcg_gp check in check_scale_change
Kemeng Shi [Tue, 18 Oct 2022 11:12:38 +0000 (19:12 +0800)]
block: Remove redundant parent blkcg_gp check in check_scale_change

Function blkcg_iolatency_throttle will make sure blkg->parent is not
NULL before calling check_scale_change, and check_scale_change is only
called from blkcg_iolatency_throttle.

Signed-off-by: Kemeng Shi <shikemeng@huawei.com>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Link: https://lore.kernel.org/r/20221018111240.22612-2-shikemeng@huawei.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoblock: split elevator_switch
Christoph Hellwig [Sun, 30 Oct 2022 10:07:14 +0000 (11:07 +0100)]
block: split elevator_switch

Split an elevator_disable helper from elevator_switch for the case where
we want to switch to no scheduler at all.  This includes removing the
pointless elevator_switch_mq helper and removing the switch to no
schedule logic from blk_mq_init_sched.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20221030100714.876891-8-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoblock: don't check for required features in elevator_match
Christoph Hellwig [Sun, 30 Oct 2022 10:07:13 +0000 (11:07 +0100)]
block: don't check for required features in elevator_match

Checking for the required features in the callers simplifies the code
quite a bit, so do that.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20221030100714.876891-7-hch@lst.de
[axboe: adjust for dropping patch 1, use __elevator_find()]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoblock: simplify the check for the current elevator in elv_iosched_show
Christoph Hellwig [Sun, 30 Oct 2022 10:07:12 +0000 (11:07 +0100)]
block: simplify the check for the current elevator in elv_iosched_show

Just compare the pointers instead of using the string based
elevator_match.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20221030100714.876891-6-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoblock: cleanup the variable naming in elv_iosched_store
Christoph Hellwig [Sun, 30 Oct 2022 10:07:11 +0000 (11:07 +0100)]
block: cleanup the variable naming in elv_iosched_store

Use eq for the elevator_queue as done elsewhere.  This frees e to be used
for the loop iterator instead of the odd __ prefix.  In addition rename
elv to cur to make it more clear it is the currently selected elevator.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20221030100714.876891-5-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoblock: exit elv_iosched_show early when I/O schedulers are not supported
Christoph Hellwig [Sun, 30 Oct 2022 10:07:10 +0000 (11:07 +0100)]
block: exit elv_iosched_show early when I/O schedulers are not supported

If the tag_set has the BLK_MQ_F_NO_SCHED flag set we will never show
any scheduler, so exit early.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20221030100714.876891-4-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoblock: cleanup elevator_get
Christoph Hellwig [Sun, 30 Oct 2022 10:07:09 +0000 (11:07 +0100)]
block: cleanup elevator_get

Do the request_module and repeated lookup in the only caller that cares,
pick a saner name that makes clear we are actually doing a lookup, and
use a saner calling convention that passes the queue first.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20221030100714.876891-3-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoMerge branch 'for-6.2/block' into for-next
Jens Axboe [Tue, 1 Nov 2022 13:09:52 +0000 (07:09 -0600)]
Merge branch 'for-6.2/block' into for-next

* for-6.2/block:
  block, bfq: cleanup __bfq_weights_tree_remove()
  block, bfq: cleanup bfq_weights_tree add/remove apis
  block, bfq: do not idle if only one group is activated
  block, bfq: refactor the counting of 'num_groups_with_pending_reqs'
  block, bfq: record how many queues have pending requests
  block, bfq: support to track if bfqq has pending requests

2 years agoblock, bfq: cleanup __bfq_weights_tree_remove()
Yu Kuai [Fri, 16 Sep 2022 07:19:42 +0000 (15:19 +0800)]
block, bfq: cleanup __bfq_weights_tree_remove()

It's the same as bfq_weights_tree_remove() now.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Acked-by: Paolo Valente <paolo.valente@linaro.org>
Link: https://lore.kernel.org/r/20220916071942.214222-7-yukuai1@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoblock, bfq: cleanup bfq_weights_tree add/remove apis
Yu Kuai [Fri, 16 Sep 2022 07:19:41 +0000 (15:19 +0800)]
block, bfq: cleanup bfq_weights_tree add/remove apis

The 'bfq_data' and 'rb_root_cached' can both be accessed through
'bfq_queue', thus only pass 'bfq_queue' as parameter.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Acked-by: Paolo Valente <paolo.valente@linaro.org>
Link: https://lore.kernel.org/r/20220916071942.214222-6-yukuai1@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoblock, bfq: do not idle if only one group is activated
Yu Kuai [Fri, 16 Sep 2022 07:19:40 +0000 (15:19 +0800)]
block, bfq: do not idle if only one group is activated

Now that the root group is counted into 'num_groups_with_pending_reqs',
'num_groups_with_pending_reqs > 0' is always true in
bfq_asymmetric_scenario(). Thus change the condition to '> 1'.

On the other hand, this change can enable concurrent sync I/O if only
one group is activated.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Acked-by: Paolo Valente <paolo.valente@linaro.org>
Link: https://lore.kernel.org/r/20220916071942.214222-5-yukuai1@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoblock, bfq: refactor the counting of 'num_groups_with_pending_reqs'
Yu Kuai [Fri, 16 Sep 2022 07:19:39 +0000 (15:19 +0800)]
block, bfq: refactor the counting of 'num_groups_with_pending_reqs'

Currently, bfq can't handle sync I/O concurrently as long as it is
not issued from the root group. This is because
'bfqd->num_groups_with_pending_reqs > 0' is always true in
bfq_asymmetric_scenario().

The way that bfqg is counted into 'num_groups_with_pending_reqs':

Before this patch:
 1) root group will never be counted.
 2) Count if bfqg or its child bfqgs have pending requests.
 3) Don't count if bfqg and its child bfqgs complete all the requests.

After this patch:
 1) root group is counted.
 2) Count if bfqg has pending requests.
 3) Don't count if bfqg completes all the requests.

With this change, the case where only one group is activated can be
detected, and the next patch will support concurrent sync I/O in that
case.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Acked-by: Paolo Valente <paolo.valente@linaro.org>
Link: https://lore.kernel.org/r/20220916071942.214222-4-yukuai1@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoblock, bfq: record how many queues have pending requests
Yu Kuai [Fri, 16 Sep 2022 07:19:38 +0000 (15:19 +0800)]
block, bfq: record how many queues have pending requests

Prepare to refactor the counting of 'num_groups_with_pending_reqs'.

Add a counter in bfq_group and update it while tracking whether a bfqq
has pending requests and when bfq_bfqq_move() is called.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Acked-by: Paolo Valente <paolo.valente@linaro.org>
Link: https://lore.kernel.org/r/20220916071942.214222-3-yukuai1@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoblock, bfq: support to track if bfqq has pending requests
Yu Kuai [Fri, 16 Sep 2022 07:19:37 +0000 (15:19 +0800)]
block, bfq: support to track if bfqq has pending requests

If an entity belongs to a bfqq, then entity->in_groups_with_pending_reqs
is currently unused. This patch uses it to track whether the bfqq has
pending requests, via the callers of weights_tree insertion and removal.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Acked-by: Paolo Valente <paolo.valente@linaro.org>
Link: https://lore.kernel.org/r/20220916071942.214222-2-yukuai1@huaweicloud.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoMerge branch 'for-6.2/block' into for-next
Jens Axboe [Mon, 31 Oct 2022 13:32:00 +0000 (07:32 -0600)]
Merge branch 'for-6.2/block' into for-next

* for-6.2/block:
  blk-mq: remove redundant call to blk_freeze_queue_start in blk_mq_destroy_queue
  blk-mq: move queue_is_mq out of blk_mq_cancel_work_sync

2 years agoblk-mq: remove redundant call to blk_freeze_queue_start in blk_mq_destroy_queue
Jinlong Chen [Sun, 30 Oct 2022 08:32:12 +0000 (16:32 +0800)]
blk-mq: remove redundant call to blk_freeze_queue_start in blk_mq_destroy_queue

The calling relationship in blk_mq_destroy_queue() is as follows:

blk_mq_destroy_queue()
    ...
    -> blk_queue_start_drain()
        -> blk_freeze_queue_start()  <- called
        ...
    -> blk_freeze_queue()
        -> blk_freeze_queue_start()  <- called again
        -> blk_mq_freeze_queue_wait()
    ...

So there is a redundant call to blk_freeze_queue_start().

Replace blk_freeze_queue() with blk_mq_freeze_queue_wait() to avoid the
redundant call.

Signed-off-by: Jinlong Chen <nickyc975@zju.edu.cn>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20221030083212.1251255-1-nickyc975@zju.edu.cn
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoblk-mq: move queue_is_mq out of blk_mq_cancel_work_sync
Jinlong Chen [Sun, 30 Oct 2022 09:47:30 +0000 (17:47 +0800)]
blk-mq: move queue_is_mq out of blk_mq_cancel_work_sync

The only caller that needs the queue_is_mq() check is del_gendisk(), so
move the check into it.
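
In effect (a sketch, not the exact hunk; only the relevant call is shown):

void del_gendisk(struct gendisk *disk)
{
	struct request_queue *q = disk->queue;

	/* ... teardown ... */

	if (queue_is_mq(q))
		blk_mq_cancel_work_sync(q);

	/* ... more teardown ... */
}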

Signed-off-by: Jinlong Chen <nickyc975@zju.edu.cn>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20221030094730.1275463-1-nickyc975@zju.edu.cn
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoMerge branch 'for-6.2/block' into for-next
Jens Axboe [Mon, 31 Oct 2022 13:26:54 +0000 (07:26 -0600)]
Merge branch 'for-6.2/block' into for-next

* for-6.2/block:
  block: simplify blksize_bits() implementation

2 years agoblock: simplify blksize_bits() implementation
Dawei Li [Sun, 30 Oct 2022 05:20:08 +0000 (13:20 +0800)]
block: simplify blksize_bits() implementation

Convert the current loop-based implementation into a bit operation, which
brings two improvements:

1) bitops are more efficient thanks to arch-level optimization.

2) Given that blksize_bits() is inline, _if_ @size is a compile-time
constant, order_base_2() _may_ allow the result to be evaluated at compile
time, depending on code context and compiler behavior.

A compilable sketch of the idea is shown below.
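
The following userspace stand-in compares the two approaches for the
power-of-two sizes block devices use; __builtin_clz() approximates
order_base_2() here, and the in-kernel conversion may differ in detail:

#include <assert.h>
#include <stdio.h>

/* loop-based version, modeled on the old helper */
static unsigned int blksize_bits_loop(unsigned int size)
{
	unsigned int bits = 8;

	do {
		bits++;
		size >>= 1;
	} while (size > 256);
	return bits;
}

/* bit-operation version; 31 - clz(size) is ilog2(size) for powers of two */
static unsigned int blksize_bits_bitop(unsigned int size)
{
	return (unsigned int)(31 - __builtin_clz(size));
}

int main(void)
{
	for (unsigned int size = 512; size <= 65536; size <<= 1) {
		assert(blksize_bits_loop(size) == blksize_bits_bitop(size));
		printf("%6u -> %u bits\n", size, blksize_bits_bitop(size));
	}
	return 0;
}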

Signed-off-by: Dawei Li <set_pte_at@outlook.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/TYCP286MB23238842958D7C083D6B67CECA349@TYCP286MB2323.JPNP286.PROD.OUTLOOK.COM
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoMerge branch 'for-6.2/block' into for-next
Jens Axboe [Mon, 31 Oct 2022 13:24:46 +0000 (07:24 -0600)]
Merge branch 'for-6.2/block' into for-next

* for-6.2/block:
  blk-mq: avoid double ->queue_rq() because of early timeout

2 years agoblk-mq: avoid double ->queue_rq() because of early timeout
David Jeffery [Wed, 26 Oct 2022 05:19:57 +0000 (13:19 +0800)]
blk-mq: avoid double ->queue_rq() because of early timeout

David Jeffery found a double ->queue_rq() issue. So far it can be
triggered in VM use cases because of long vmexit latency, preemption
latency of the vCPU pthread, or a long page fault in the vCPU pthread: a
block I/O request can time out after blk_mq_start_request() has been
called during ->queue_rq() but before the request is queued to hardware.
The timeout handler may then requeue it, causing a double ->queue_rq()
and a kernel panic.

So far it has been the driver's responsibility to cover the race between
timeout and completion, so in theory the problem should be solved in the
driver, given that the driver has enough knowledge.

But it is really a common problem: lots of drivers could have a similar
issue, it would be hard to fix all affected drivers, and it isn't easy
for a driver to handle the race anyway. So David suggested solving it by
draining in-progress ->queue_rq() calls, which is what this patch does.
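
The approach, very roughly (a hypothetical sketch of the drain idea; apart
from ->queue_rq() itself, the function names here are placeholders, not
real blk-mq symbols):

static void timeout_work_sketch(struct request_queue *q)
{
	/* pass 1: only check which requests have expired, handle nothing yet */
	bool found_expired = scan_for_expired_requests(q);

	if (!found_expired)
		return;

	/*
	 * Drain: wait until every in-progress ->queue_rq() has returned, so
	 * a request that was started but not yet queued to hardware cannot
	 * be requeued (and dispatched a second time) under the driver.
	 */
	wait_for_in_progress_queue_rq(q);

	/* pass 2: now it is safe to actually time out the expired requests */
	handle_expired_requests(q);
}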

Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Keith Busch <kbusch@kernel.org>
Cc: virtualization@lists.linux-foundation.org
Cc: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: David Jeffery <djeffery@redhat.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Link: https://lore.kernel.org/r/20221026051957.358818-1-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2 years agoLinux 6.1-rc3
Linus Torvalds [Sun, 30 Oct 2022 22:19:28 +0000 (15:19 -0700)]
Linux 6.1-rc3

2 years agoMerge tag 'fbdev-for-6.1-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/deller...
Linus Torvalds [Sun, 30 Oct 2022 18:31:14 +0000 (11:31 -0700)]
Merge tag 'fbdev-for-6.1-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/linux-fbdev

Pull fbdev fixes from Helge Deller:
 "A use-after-free bugfix in the smscufx driver and various minor error
  path fixes, smaller build fixes, sysfs fixes and typos in comments in
  the stifb, sisfb, da8xxfb, xilinxfb, sm501fb, gbefb and cyber2000fb
  drivers"

* tag 'fbdev-for-6.1-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/linux-fbdev:
  fbdev: cyber2000fb: fix missing pci_disable_device()
  fbdev: sisfb: use explicitly signed char
  fbdev: smscufx: Fix several use-after-free bugs
  fbdev: xilinxfb: Make xilinxfb_release() return void
  fbdev: sisfb: fix repeated word in comment
  fbdev: gbefb: Convert sysfs snprintf to sysfs_emit
  fbdev: sm501fb: Convert sysfs snprintf to sysfs_emit
  fbdev: stifb: Fall back to cfb_fillrect() on 32-bit HCRX cards
  fbdev: da8xx-fb: Fix error handling in .remove()
  fbdev: MIPS supports iomem addresses

2 years agoMerge tag 'char-misc-6.1-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh...
Linus Torvalds [Sun, 30 Oct 2022 18:22:33 +0000 (11:22 -0700)]
Merge tag 'char-misc-6.1-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc fixes from Greg KH:
 "Some small driver fixes for 6.1-rc3.  They include:

   - iio driver bugfixes

   - counter driver bugfixes

   - coresight bugfixes, including a revert and then a second fix to get
     it right.

  All of these have been in linux-next with no reported problems"

* tag 'char-misc-6.1-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (21 commits)
  misc: sgi-gru: use explicitly signed char
  coresight: cti: Fix hang in cti_disable_hw()
  Revert "coresight: cti: Fix hang in cti_disable_hw()"
  counter: 104-quad-8: Fix race getting function mode and direction
  counter: microchip-tcb-capture: Handle Signal1 read and Synapse
  coresight: cti: Fix hang in cti_disable_hw()
  coresight: Fix possible deadlock with lock dependency
  counter: ti-ecap-capture: fix IS_ERR() vs NULL check
  counter: Reduce DEFINE_COUNTER_ARRAY_POLARITY() to defining counter_array
  iio: bmc150-accel-core: Fix unsafe buffer attributes
  iio: adxl367: Fix unsafe buffer attributes
  iio: adxl372: Fix unsafe buffer attributes
  iio: at91-sama5d2_adc: Fix unsafe buffer attributes
  iio: temperature: ltc2983: allocate iio channels once
  tools: iio: iio_utils: fix digit calculation
  iio: adc: stm32-adc: fix channel sampling time init
  iio: adc: mcp3911: mask out device ID in debug prints
  iio: adc: mcp3911: use correct id bits
  iio: adc: mcp3911: return proper error code on failure to allocate trigger
  iio: adc: mcp3911: fix sizeof() vs ARRAY_SIZE() bug
  ...

2 years agoMerge tag 'usb-6.1-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb
Linus Torvalds [Sun, 30 Oct 2022 17:35:07 +0000 (10:35 -0700)]
Merge tag 'usb-6.1-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb

Pull USB fixes from Greg KH:
 "A few small USB fixes for 6.1-rc3. Include in here are:

   - MAINTAINERS update, including a big one for the USB gadget
     subsystem. Many thanks to Felipe for all of the years of hard work
     he has done on this codebase, it was greatly appreciated.

   - dwc3 driver fixes for reported problems.

   - xhci driver fixes for reported problems.

   - typec driver fixes for minor issues

   - uvc gadget driver change, and then revert as it wasn't relevant for
     6.1-final, as it is a new feature and people are still reviewing
     and modifying it.

  All of these have been in the linux-next tree with no reported issues"

* tag 'usb-6.1-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb:
  usb: dwc3: gadget: Don't set IMI for no_interrupt
  usb: dwc3: gadget: Stop processing more requests on IMI
  Revert "usb: gadget: uvc: limit isoc_sg to super speed gadgets"
  xhci: Remove device endpoints from bandwidth list when freeing the device
  xhci-pci: Set runtime PM as default policy on all xHC 1.2 or later devices
  xhci: Add quirk to reset host back to default state at shutdown
  usb: xhci: add XHCI_SPURIOUS_SUCCESS to ASM1042 despite being a V0.96 controller
  usb: dwc3: st: Rely on child's compatible instead of name
  usb: gadget: uvc: limit isoc_sg to super speed gadgets
  usb: bdc: change state when port disconnected
  usb: typec: ucsi: acpi: Implement resume callback
  usb: typec: ucsi: Check the connection on resume
  usb: gadget: aspeed: Fix probe regression
  usb: gadget: uvc: fix sg handling during video encode
  usb: gadget: uvc: fix sg handling in error case
  usb: gadget: uvc: fix dropped frame after missed isoc
  usb: dwc3: gadget: Don't delay End Transfer on delayed_status
  usb: dwc3: Don't switch OTG -> peripheral if extcon is present
  MAINTAINERS: Update maintainers for broadcom USB
  MAINTAINERS: move USB gadget and phy entries under the main USB entry

2 years agoMerge tag 'gpio-fixes-for-v6.1-rc3' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sun, 30 Oct 2022 17:21:42 +0000 (10:21 -0700)]
Merge tag 'gpio-fixes-for-v6.1-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/brgl/linux

Pull gpio fixes from Bartosz Golaszewski:

 - convert gpio-tegra to using an immutable irqchip

 - MAINTAINERS update

* tag 'gpio-fixes-for-v6.1-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/brgl/linux:
  MAINTAINERS: Change myself to a maintainer
  gpio: tegra: Convert to immutable irq chip

2 years agoMerge tag 'perf_urgent_for_v6.1_rc3' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sun, 30 Oct 2022 16:49:18 +0000 (09:49 -0700)]
Merge tag 'perf_urgent_for_v6.1_rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull perf fixes from Borislav Petkov:

 - Rename a perf memory level event define to denote it is of CXL type

 - Add Alder and Raptor Lakes support to RAPL

 - Make sure raw sample data is output with tracepoints

* tag 'perf_urgent_for_v6.1_rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  perf/mem: Rename PERF_MEM_LVLNUM_EXTN_MEM to PERF_MEM_LVLNUM_CXL
  perf/x86/rapl: Add support for Intel Raptor Lake
  perf/x86/rapl: Add support for Intel AlderLake-N
  perf: Fix missing raw data on tracepoint events

2 years agoMerge tag 'loongarch-fixes-6.1-1' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Sun, 30 Oct 2022 16:44:06 +0000 (09:44 -0700)]
Merge tag 'loongarch-fixes-6.1-1' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson

Pull LoongArch fixes from Huacai Chen:
 "Remove unused kernel stack padding, fix some build errors/warnings and
  two bugs in laptop platform driver"

* tag 'loongarch-fixes-6.1-1' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson:
  platform/loongarch: laptop: Fix possible UAF and simplify generic_acpi_laptop_init()
  platform/loongarch: laptop: Adjust resume order for loongson_hotkey_resume()
  LoongArch: BPF: Avoid declare variables in switch-case
  LoongArch: Use flexible-array member instead of zero-length array
  LoongArch: Remove unused kernel stack padding

2 years agoMerge tag '6.1-rc2-smb3-fixes' of git://git.samba.org/sfrench/cifs-2.6
Linus Torvalds [Sun, 30 Oct 2022 16:40:04 +0000 (09:40 -0700)]
Merge tag '6.1-rc2-smb3-fixes' of git://git.samba.org/sfrench/cifs-2.6

Pull cifs fixes from Steve French:

 - use after free fix for reconnect race

 - two memory leak fixes

* tag '6.1-rc2-smb3-fixes' of git://git.samba.org/sfrench/cifs-2.6:
  cifs: fix use-after-free caused by invalid pointer `hostname`
  cifs: Fix pages leak when writedata alloc failed in cifs_write_from_iter()
  cifs: Fix pages array leak when writedata alloc failed in cifs_writedata_alloc()

2 years agoMerge tag 'random-6.1-rc3-for-linus' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sun, 30 Oct 2022 01:33:03 +0000 (18:33 -0700)]
Merge tag 'random-6.1-rc3-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random

Pull random number generator fix from Jason Donenfeld:
 "One fix from Jean-Philippe Brucker, addressing a regression in which
  early boot code on ARM64 would use the non-_early variant of the
  arch_get_random family of functions, resulting in the architectural
  random number generator appearing unavailable during that early phase
  of boot.

  The fix simply changes arch_get_random*() to arch_get_random*_early().

  This distinction between these two functions is a bit of an old wart
  I'm not a fan of, and for 6.2 I'll see if I can make obsolete the
  _early variant, so that one function does the right thing in all
  contexts without overhead"

* tag 'random-6.1-rc3-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/crng/random:
  random: use arch_get_random*_early() in random_init()

2 years agoMerge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Linus Torvalds [Sun, 30 Oct 2022 01:12:45 +0000 (18:12 -0700)]
Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi

Pull SCSI fixes from James Bottomley:
 "Varions small  fixes, all  in drivers.

  Some of these arrived during the merge window and got held over to
  make sure of testing on the -rc tree.

  The biggest change is for standards conformance in the target driver,
  closely followed by a set of bug fixes in megaraid_sas"

* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (21 commits)
  scsi: ufs: core: Fix typo in comment
  scsi: mpi3mr: Select CONFIG_SCSI_SAS_ATTRS
  scsi: ufs: core: Fix typo for register name in comments
  scsi: pm80xx: Display proc_name in sysfs
  scsi: ufs: core: Fix the error log in ufshcd_query_flag_retry()
  scsi: ufs: core: Remove unneeded casts from void *
  scsi: lpfc: Fix spelling mistake "unsolicted" -> "unsolicited"
  scsi: qla2xxx: Use transport-defined speed mask for supported_speeds
  scsi: target: iblock: Fold iblock_emulate_read_cap_with_block_size() into iblock_get_blocks()
  scsi: qla2xxx: Fix serialization of DCBX TLV data request
  scsi: ufs: qcom: Remove redundant dev_err() call
  scsi: megaraid_sas: Move megasas_dbg_lvl init to megasas_init()
  scsi: megaraid_sas: Remove unnecessary memset()
  scsi: megaraid_sas: Simplify megasas_update_device_list
  scsi: megaraid_sas: Correct an error message
  scsi: megaraid_sas: Correct value passed to scsi_device_lookup()
  scsi: target: core: UA on all LUNs after reset
  scsi: target: core: New key must be used for moved PR
  scsi: target: core: Abort all preempted regs if requested
  scsi: target: core: Fix memory leak in preempt_and_abort
  ...

2 years agoMerge tag 'block-6.1-2022-10-28' of git://git.kernel.dk/linux
Linus Torvalds [Sun, 30 Oct 2022 01:06:52 +0000 (18:06 -0700)]
Merge tag 'block-6.1-2022-10-28' of git://git.kernel.dk/linux

Pull block fixes from Jens Axboe:

 - NVMe pull request via Christoph:
      - make the multipath dma alignment match the non-multipath one
        (Keith Busch)
      - fix a bogus use of sg_init_marker() (Nam Cao)
      - fix circular locking in nvme-tcp (Sagi Grimberg)

 - Initialization fix for requests allocated via the special hw queue
   allocator (John)

 - Fix for a regression added in this release with the batched
   completions of end_io backed requests (Ming)

 - Error handling leak fix for rbd (Yang)

 - Error handling leak fix for add_disk() failure (Yu)

* tag 'block-6.1-2022-10-28' of git://git.kernel.dk/linux:
  blk-mq: Properly init requests from blk_mq_alloc_request_hctx()
  blk-mq: don't add non-pt request with ->end_io to batch
  rbd: fix possible memory leak in rbd_sysfs_init()
  nvme-multipath: set queue dma alignment to 3
  nvme-tcp: fix possible circular locking when deleting a controller under memory pressure
  nvme-tcp: replace sg_init_marker() with sg_init_table()
  block: fix memory leak for elevator on add_disk failure