www.infradead.org Git - users/hch/block.git/log
3 years agomd: properly unwind when failing to add the kobject in md_alloc md-fix
Christoph Hellwig [Tue, 31 Aug 2021 07:33:56 +0000 (09:33 +0200)]
md: properly unwind when failing to add the kobject in md_alloc

Add proper error handling to delete the gendisk when failing to add
the md kobject and clean up the error unwinding in general.

Signed-off-by: Christoph Hellwig <hch@lst.de>
3 years agomd: extend disks_mutex coverage
Christoph Hellwig [Tue, 31 Aug 2021 07:32:21 +0000 (09:32 +0200)]
md: extend disks_mutex coverage

disks_mutex is intended to serialize md_alloc.  Extend it to also cover
the kobject_uevent call and getting the sysfs dirent, to help reduce the
error handling complexity.

Signed-off-by: Christoph Hellwig <hch@lst.de>
3 years agomd: add the bitmap group to the default groups for the md kobject
Christoph Hellwig [Mon, 30 Aug 2021 10:21:23 +0000 (12:21 +0200)]
md: add the bitmap group to the default groups for the md kobject

Replace the deprecated default_attrs with the default_groups mechanism,
and add the always visible bitmap group to the groups created at
kobject_add time.

Signed-off-by: Christoph Hellwig <hch@lst.de>
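
A minimal sketch of the default_groups mechanism this refers to (illustrative
only; the attribute lists and helper names below are hypothetical, not taken
from the md patch):

/* The always-visible "bitmap" group is registered together with the default
 * attributes when kobject_add() runs, instead of via the deprecated
 * default_attrs list. */
static const struct attribute_group md_bitmap_group = {
    .name   = "bitmap",
    .attrs  = md_bitmap_attrs,          /* hypothetical attribute list */
};

static const struct attribute_group md_default_group = {
    .attrs  = md_default_attrs,         /* hypothetical attribute list */
};

static const struct attribute_group *md_attr_groups[] = {
    &md_default_group,
    &md_bitmap_group,
    NULL,
};

static struct kobj_type md_ktype = {
    .release        = md_kobj_release,  /* hypothetical */
    .sysfs_ops      = &md_sysfs_ops,
    .default_groups = md_attr_groups,   /* replaces .default_attrs */
};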
3 years agomd: add error handling support for add_disk()
Luis Chamberlain [Tue, 31 Aug 2021 07:19:34 +0000 (09:19 +0200)]
md: add error handling support for add_disk()

We never checked for errors on add_disk() as this function
returned void. Now that this is fixed, use the shiny new
error handling.

We just do the unwinding of what was not done before, and are
sure to unlock prior to bailing.

Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 years agomd: fix a lock order reversal
Christoph Hellwig [Mon, 30 Aug 2021 10:27:50 +0000 (12:27 +0200)]
md: fix a lock order reversal

Commit b0140891a8cea3 ("md: Fix race when creating a new md device.")
not only moved assigning mddev->gendisk before calling add_disk, which
fixes the races described in the commit log, but also added a
mddev->open_mutex critical section over add_disk and creation of the
md kobj.  Adding a kobject after add_disk is racy vs deleting the gendisk
right after adding it, but md already protects against that by holding
an mddev->active reference.

On the other hand, taking this lock added a lock order reversal with what
is now disk->open_mutex (bdev->bd_mutex when the commit was added) for
partition devices, which need that lock for the internal open during the
partition scan, and a recent commit also takes it for non-partitioned
devices, leading to further lockdep splats.

Fixes: b0140891a8ce ("md: Fix race when creating a new md device.")
Fixes: d62633873590 ("block: support delayed holder registration")
Reported-by: syzbot+fadc0aaf497e6a493b9f@syzkaller.appspotmail.com
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: syzbot+fadc0aaf497e6a493b9f@syzkaller.appspotmail.com
3 years agoMerge branch 'for-5.15/io_uring' into for-next
Jens Axboe [Tue, 31 Aug 2021 01:39:54 +0000 (19:39 -0600)]
Merge branch 'for-5.15/io_uring' into for-next

* for-5.15/io_uring:
  io_uring: IORING_OP_WRITE needs hash_reg_file set
  io-wq: split bounded and unbounded work into separate lists

3 years agoio_uring: IORING_OP_WRITE needs hash_reg_file set
Jens Axboe [Tue, 31 Aug 2021 01:37:41 +0000 (19:37 -0600)]
io_uring: IORING_OP_WRITE needs hash_reg_file set

During some testing, it became evident that using IORING_OP_WRITE doesn't
hash buffered writes like the other write commands do. That's simply
an oversight, and can cause performance regressions when doing buffered
writes with this command.

Correct that and add the flag, so that buffered writes are correctly
hashed when using the non-iovec based write command.

Cc: stable@vger.kernel.org
Fixes: 3a6820f2bb8a ("io_uring: add non-vectored read/write commands")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio-wq: split bounded and unbounded work into separate lists
Jens Axboe [Mon, 30 Aug 2021 20:17:12 +0000 (14:17 -0600)]
io-wq: split bounded and unbounded work into separate lists

We've got a few issues that all boil down to the fact that we have one
list of pending work items, yet two different types of workers to
serve them. This causes some oddities around workers switching type and
even hashed work vs regular work on the same bounded list.

Just separate them out cleanly, similarly to how we already do
accounting of what is running. That provides a clean separation and
removes some corner cases that can cause stalls when handling IO
that is punted to io-wq.

Fixes: ecc53c48c13d ("io-wq: check max_worker limits if a worker transitions bound state")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoMerge branch 'for-5.15/io_uring' into for-next
Jens Axboe [Mon, 30 Aug 2021 17:58:05 +0000 (11:58 -0600)]
Merge branch 'for-5.15/io_uring' into for-next

* for-5.15/io_uring:
  io-wq: fix race between adding work and activating a free worker

3 years agoio-wq: fix race between adding work and activating a free worker
Jens Axboe [Mon, 30 Aug 2021 17:55:22 +0000 (11:55 -0600)]
io-wq: fix race between adding work and activating a free worker

The attempt to find and activate a free worker for new work is currently
combined with creating a new one if we don't find one, but that opens
io-wq up to a race where the worker that is found and activated can
put itself to sleep without knowing that it has been selected to perform
this new work.

Fix this by moving the activation into where we add the new work item,
then we can retain it within the wqe->lock scope and eliminate the race
with the worker itself checking inside the lock, but sleeping outside of
it.

Cc: stable@vger.kernel.org
Reported-by: Andres Freund <andres@anarazel.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoMerge branch 'for-5.15/io_uring' into for-next
Jens Axboe [Mon, 30 Aug 2021 13:51:25 +0000 (07:51 -0600)]
Merge branch 'for-5.15/io_uring' into for-next

* for-5.15/io_uring:
  io-wq: fix wakeup race when adding new work
  io-wq: wqe and worker locks no longer need to be IRQ safe

3 years agoio-wq: fix wakeup race when adding new work
Jens Axboe [Mon, 30 Aug 2021 13:45:47 +0000 (07:45 -0600)]
io-wq: fix wakeup race when adding new work

When new work is added, io_wqe_enqueue() checks if we need to wake or
create a new worker. But that check is done outside the lock that
otherwise synchronizes us with a worker going to sleep, so we can end
up in the following situation:

CPU0                                    CPU1
lock
insert work
unlock
atomic_read(nr_running) != 0
                                        lock
                                        atomic_dec(nr_running)
                                        no wakeup needed

Hold the wqe lock around the "need to wakeup" check. Then we can also get
rid of the temporary work_flags variable, as we know the work will remain
valid as long as we hold the lock.

Cc: stable@vger.kernel.org
Reported-by: Andres Freund <andres@anarazel.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio-wq: wqe and worker locks no longer need to be IRQ safe
Jens Axboe [Mon, 30 Aug 2021 12:33:08 +0000 (06:33 -0600)]
io-wq: wqe and worker locks no longer need to be IRQ safe

io_uring no longer queues async work off completion handlers that run in
hard or soft interrupt context, and that use case was the only reason that
io-wq had to use IRQ safe locks for wqe and worker locks.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoMerge branch 'for-5.15/io_uring' into for-next
Jens Axboe [Mon, 30 Aug 2021 13:31:38 +0000 (07:31 -0600)]
Merge branch 'for-5.15/io_uring' into for-next

* for-5.15/io_uring:
  io-wq: check max_worker limits if a worker transitions bound state

3 years agoio-wq: check max_worker limits if a worker transitions bound state
Jens Axboe [Sun, 29 Aug 2021 22:13:03 +0000 (16:13 -0600)]
io-wq: check max_worker limits if a worker transitions bound state

For the two places where new workers are created, we diligently check if
we are allowed to create a new worker. If we're currently at the limit
of how many workers of a given type we can have, then we don't create
any new ones.

If you have a mixed workload with various types of bounded and unbounded
work, then it can happen that a worker finishes one type of work and
is then transitioned to the other type. For this case, we don't check
if we are actually allowed to do so. This can cause io-wq to temporarily
exceed the allowed number of workers for a given type.

When retrieving work, check that the types match. If they don't, check
if we are allowed to transition to the other type. If not, then don't
handle the new work.

Cc: stable@vger.kernel.org
Reported-by: Johannes Lundberg <johalun0@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoMerge branch 'for-5.15/drivers' into for-next
Jens Axboe [Sun, 29 Aug 2021 22:12:49 +0000 (16:12 -0600)]
Merge branch 'for-5.15/drivers' into for-next

* for-5.15/drivers:
  Revert "floppy: reintroduce O_NDELAY fix"
  raid1: ensure write behind bio has less than BIO_MAX_VECS sectors
  md/raid10: Remove unnecessary rcu_dereference in raid10_handle_discard

3 years agoMerge branch 'for-5.15/io_uring' into for-next
Jens Axboe [Sun, 29 Aug 2021 22:12:47 +0000 (16:12 -0600)]
Merge branch 'for-5.15/io_uring' into for-next

* for-5.15/io_uring:
  io_uring: allow updating linked timeouts
  io_uring: keep ltimeouts in a list
  io_uring: support CLOCK_BOOTTIME/REALTIME for timeouts
  io-wq: provide a way to limit max number of workers

3 years agoio_uring: allow updating linked timeouts
Pavel Begunkov [Sun, 29 Aug 2021 01:54:39 +0000 (19:54 -0600)]
io_uring: allow updating linked timeouts

We already allow updating normal timeouts; add support for adjusting the
timings of linked timeouts as well.

Reported-by: Victor Stewart <v@nametag.social>
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: keep ltimeouts in a list
Pavel Begunkov [Sun, 29 Aug 2021 01:54:38 +0000 (19:54 -0600)]
io_uring: keep ltimeouts in a list

A preparation patch. Keep all queued linked timeouts in a list, so they
can be found and updated.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: support CLOCK_BOOTTIME/REALTIME for timeouts
Jens Axboe [Fri, 27 Aug 2021 23:11:06 +0000 (17:11 -0600)]
io_uring: support CLOCK_BOOTTIME/REALTIME for timeouts

Certain use cases want to use CLOCK_BOOTTIME or CLOCK_REALTIME instead of
the default CLOCK_MONOTONIC.

Add IORING_TIMEOUT_BOOTTIME and IORING_TIMEOUT_REALTIME flags that allow
timeouts and linked timeouts to use the selected clock source.

Only one clock source may be selected, and we -EINVAL the request if more
than one is given. If neither BOOTTIME nor REALTIME is selected, the
previous default of MONOTONIC is used.

Link: https://github.com/axboe/liburing/issues/369
Signed-off-by: Jens Axboe <axboe@kernel.dk>
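
A minimal userspace sketch of selecting the clock source (liburing helper
names assumed; the IORING_TIMEOUT_* flags come from the uapi header updated
by this patch):

#include <liburing.h>

static int queue_boottime_timeout(struct io_uring *ring)
{
    struct __kernel_timespec ts = { .tv_sec = 5, .tv_nsec = 0 };
    struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

    if (!sqe)
        return -1;
    /* IORING_TIMEOUT_REALTIME would select CLOCK_REALTIME instead;
     * selecting both flags is rejected with -EINVAL. */
    io_uring_prep_timeout(sqe, &ts, 0, IORING_TIMEOUT_BOOTTIME);
    return io_uring_submit(ring);
}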
3 years agoio-wq: provide a way to limit max number of workers
Jens Axboe [Fri, 27 Aug 2021 17:33:19 +0000 (11:33 -0600)]
io-wq: provide a way to limit max number of workers

io-wq divides work into two categories:

1) Work that completes in a bounded time, like reading from a regular file
   or a block device. This type of work is limited based on the size of
   the SQ ring.

2) Work that may never complete, we call this unbounded work. The number
   of workers here is limited only by RLIMIT_NPROC.

For various use cases, it's handy to have the kernel limit the maximum
number of pending workers for both categories. Provide a way to do so
with a new IORING_REGISTER_IOWQ_MAX_WORKERS operation.

IORING_REGISTER_IOWQ_MAX_WORKERS takes an array of two integers and sets
the max worker count to what is being passed in for each category. The
old values are returned into that same array. If 0 is being passed in for
either category, it simply returns the current value.

The value is capped at RLIMIT_NPROC. This actually isn't that important,
as it's more of a hint; if we're exceeding the value, then our attempt
to fork a new worker will fail. This happens naturally already if more
than one node is in the system, as these values are per-node internally
for io-wq.

Reported-by: Johannes Lundberg <johalun0@gmail.com>
Link: https://github.com/axboe/liburing/issues/420
Signed-off-by: Jens Axboe <axboe@kernel.dk>
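
A minimal userspace sketch of the new interface (raw syscall form to avoid
assuming a particular liburing helper; ring_fd is a ring set up elsewhere):

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/io_uring.h>

static int limit_iowq_workers(int ring_fd, unsigned int bounded,
                              unsigned int unbounded)
{
    /* [0] = bounded workers, [1] = unbounded workers; 0 means "just read" */
    unsigned int counts[2] = { bounded, unbounded };

    if (syscall(__NR_io_uring_register, ring_fd,
                IORING_REGISTER_IOWQ_MAX_WORKERS, counts, 2) < 0) {
        perror("IORING_REGISTER_IOWQ_MAX_WORKERS");
        return -1;
    }
    /* The previous limits are written back into the array. */
    printf("old limits: bounded=%u unbounded=%u\n", counts[0], counts[1]);
    return 0;
}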
3 years agoMerge tag 'floppy-for-5.15' of https://github.com/evdenis/linux-floppy into for-5...
Jens Axboe [Sun, 29 Aug 2021 12:50:58 +0000 (06:50 -0600)]
Merge tag 'floppy-for-5.15' of https://github.com/evdenis/linux-floppy into for-5.15/drivers

Pull floppy fix from Denis:

"Bring back O_NDELAY for floppy

 Only one commit this time with revert of O_NDELAY removal for the floppy.
 Users reported that the commit breaks userspace utils and known floppy
 workflow patterns. We already reverted the same commit back in 2016
 presumably for the same reason. Completely drop O_NDELAY for floppy seems
 excessive to solve problems it introduces.

 I started to write basic selftests for the floppy to prevent this kind of
 userspace breaks in the future.

Signed-off-by: Denis Efremov <efremov@linux.com>"
* tag 'floppy-for-5.15' of https://github.com/evdenis/linux-floppy:
  Revert "floppy: reintroduce O_NDELAY fix"

3 years agoRevert "floppy: reintroduce O_NDELAY fix"
Denis Efremov [Sat, 7 Aug 2021 07:37:02 +0000 (10:37 +0300)]
Revert "floppy: reintroduce O_NDELAY fix"

The patch breaks userspace implementations (e.g. fdutils) and introduces
regressions in behaviour. Previously, it was possible to O_NDELAY open a
floppy device with no media inserted or with write protected media without
an error. Some userspace tools use this particular behavior for probing.

This is not the first time we have reverted this patch. The previous revert
is in commit f2791e7eadf4 (Revert "floppy: refactor open() flags handling").

This reverts commit 8a0c014cd20516ade9654fc13b51345ec58e7be8.

Link: https://lore.kernel.org/linux-block/de10cb47-34d1-5a88-7751-225ca380f735@compro.net/
Reported-by: Mark Hounschell <markh@compro.net>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Wim Osterholt <wim@djo.tudelft.nl>
Cc: Kurt Garloff <kurt@garloff.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Denis Efremov <efremov@linux.com>
3 years agoMerge branch 'md-next' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md...
Jens Axboe [Fri, 27 Aug 2021 22:32:01 +0000 (16:32 -0600)]
Merge branch 'md-next' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md into for-5.15/drivers

Pull MD changes from Song.

* 'md-next' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md:
  raid1: ensure write behind bio has less than BIO_MAX_VECS sectors
  md/raid10: Remove unnecessary rcu_dereference in raid10_handle_discard

3 years agoraid1: ensure write behind bio has less than BIO_MAX_VECS sectors
Guoqing Jiang [Tue, 24 Aug 2021 01:16:54 +0000 (09:16 +0800)]
raid1: ensure write behind bio has less than BIO_MAX_VECS sectors

We can't split a write-behind bio with more than BIO_MAX_VECS sectors,
otherwise the call trace below is triggered because we could allocate an
oversized write-behind bio later.

[ 8.097936] bvec_alloc+0x90/0xc0
[ 8.098934] bio_alloc_bioset+0x1b3/0x260
[ 8.099959] raid1_make_request+0x9ce/0xc50 [raid1]
[ 8.100988] ? __bio_clone_fast+0xa8/0xe0
[ 8.102008] md_handle_request+0x158/0x1d0 [md_mod]
[ 8.103050] md_submit_bio+0xcd/0x110 [md_mod]
[ 8.104084] submit_bio_noacct+0x139/0x530
[ 8.105127] submit_bio+0x78/0x1d0
[ 8.106163] ext4_io_submit+0x48/0x60 [ext4]
[ 8.107242] ext4_writepages+0x652/0x1170 [ext4]
[ 8.108300] ? do_writepages+0x41/0x100
[ 8.109338] ? __ext4_mark_inode_dirty+0x240/0x240 [ext4]
[ 8.110406] do_writepages+0x41/0x100
[ 8.111450] __filemap_fdatawrite_range+0xc5/0x100
[ 8.112513] file_write_and_wait_range+0x61/0xb0
[ 8.113564] ext4_sync_file+0x73/0x370 [ext4]
[ 8.114607] __x64_sys_fsync+0x33/0x60
[ 8.115635] do_syscall_64+0x33/0x40
[ 8.116670] entry_SYSCALL_64_after_hwframe+0x44/0xae

Thanks for the comment from Christoph.

[1]. https://bugs.archlinux.org/task/70992

Cc: stable@vger.kernel.org # v5.12+
Reported-by: Jens Stutte <jens@chianterastutte.eu>
Tested-by: Jens Stutte <jens@chianterastutte.eu>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Guoqing Jiang <jiangguoqing@kylinos.cn>
Signed-off-by: Song Liu <songliubraving@fb.com>
3 years agoMerge branch 'for-5.15/io_uring' into for-next
Jens Axboe [Fri, 27 Aug 2021 15:23:18 +0000 (09:23 -0600)]
Merge branch 'for-5.15/io_uring' into for-next

* for-5.15/io_uring:
  io_uring: add build check for buf_index overflows
  io_uring: clarify io_req_task_cancel() locking

3 years agoio_uring: add build check for buf_index overflows
Pavel Begunkov [Wed, 25 Aug 2021 19:51:40 +0000 (20:51 +0100)]
io_uring: add build check for buf_index overflows

req->buf_index is a u16, and so we rely on registered buffer indexes
fitting into it. Add a build check, so that when the upper limit for the
number of buffers is lifted we get a compilation failure instead of
lurking problems.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/787e8e1a17cea51ca6301426b1c4c4887b8bd676.1629920396.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: clarify io_req_task_cancel() locking
Pavel Begunkov [Wed, 25 Aug 2021 19:51:39 +0000 (20:51 +0100)]
io_uring: clarify io_req_task_cancel() locking

It's too easy to forget or misjudge the synchronisation in
io_req_task_cancel(); add a comment clarifying it.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/71099083835f983a1fd73d5a3da6391924da8300.1629920396.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoMerge branch 'for-5.15/io_uring' into for-next
Jens Axboe [Fri, 27 Aug 2021 13:30:04 +0000 (07:30 -0600)]
Merge branch 'for-5.15/io_uring' into for-next

* for-5.15/io_uring:
  io_uring: add task-refs-get helper

3 years agoio_uring: add task-refs-get helper
Pavel Begunkov [Fri, 27 Aug 2021 10:55:01 +0000 (11:55 +0100)]
io_uring: add task-refs-get helper

As task referencing has become more complicated (apart from normal task
references it includes taking tctx->inflight and caching all that), it
would be a good idea to have all that isolated in helpers.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/d9114d037f1c195897aa13f38a496078eca2afdb.1630023531.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoMerge branch 'for-5.15/io_uring' into for-next
Jens Axboe [Fri, 27 Aug 2021 13:27:34 +0000 (07:27 -0600)]
Merge branch 'for-5.15/io_uring' into for-next

* for-5.15/io_uring:
  io_uring: fix failed linkchain code logic
  io_uring: remove redundant req_set_fail()

3 years agoio_uring: fix failed linkchain code logic
Hao Xu [Fri, 27 Aug 2021 09:46:09 +0000 (17:46 +0800)]
io_uring: fix failed linkchain code logic

Given a linkchain like this:
req0(link_flag)-->req1(link_flag)-->...-->reqn(no link_flag)

There is a problem:
 - if some intermediate linked req like req1's submission fails, reqs
   after it won't be cancelled.

   - sqpoll disabled: maybe it's ok since users can get the error info
     of req1 and stop submitting the following sqes.

   - sqpoll enabled: definitely a problem, the following sqes will be
     submitted in the next round.

The solution is to refactor the code logic to:
 - if a linked req's submission fails, just mark it and the head (if it
   exists) as REQ_F_FAIL. Leverage req->result to indicate whether it
   is failed or cancelled.
 - submit or fail the whole chain when we come to the end of it.

Signed-off-by: Hao Xu <haoxu@linux.alibaba.com>
Reviewed-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/20210827094609.36052-3-haoxu@linux.alibaba.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: remove redundant req_set_fail()
Hao Xu [Fri, 27 Aug 2021 09:46:08 +0000 (17:46 +0800)]
io_uring: remove redundant req_set_fail()

req_set_fail() in io_submit_sqe() is redundant, remove it.

Signed-off-by: Hao Xu <haoxu@linux.alibaba.com>
Reviewed-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/20210827094609.36052-2-haoxu@linux.alibaba.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agomd/raid10: Remove unnecessary rcu_dereference in raid10_handle_discard
Xiao Ni [Wed, 18 Aug 2021 05:57:48 +0000 (13:57 +0800)]
md/raid10: Remove unnecessary rcu_dereference in raid10_handle_discard

We are seeing the following warning in raid10_handle_discard.
[  695.110751] =============================
[  695.131439] WARNING: suspicious RCU usage
[  695.151389] 4.18.0-319.el8.x86_64+debug #1 Not tainted
[  695.174413] -----------------------------
[  695.192603] drivers/md/raid10.c:1776 suspicious rcu_dereference_check() usage!
[  695.225107] other info that might help us debug this:
[  695.260940] rcu_scheduler_active = 2, debug_locks = 1
[  695.290157] no locks held by mkfs.xfs/10186.

The first loop of raid10_handle_discard() already determines which disks
need to handle the discard request and takes a reference on each rdev
(rdev->nr_pending). So conf->mirrors will not change until all bios come
back from the underlying disks, and there is no need to use
rcu_dereference to get rdev.

Cc: stable@vger.kernel.org
Fixes: d30588b2731f ('md/raid10: improve raid10 discard request')
Signed-off-by: Xiao Ni <xni@redhat.com>
Acked-by: Guoqing Jiang <guoqing.jiang@linux.dev>
Signed-off-by: Song Liu <songliubraving@fb.com>
3 years agoMerge branch 'for-5.15/drivers' into for-next
Jens Axboe [Wed, 25 Aug 2021 20:21:16 +0000 (14:21 -0600)]
Merge branch 'for-5.15/drivers' into for-next

* for-5.15/drivers:
  nbd: remove nbd->destroy_complete
  nbd: only return usable devices from nbd_find_unused
  nbd: set nbd->index before releasing nbd_index_mutex
  nbd: prevent IDR lookups from finding partially initialized devices
  nbd: reset NBD to NULL when restarting in nbd_genl_connect
  nbd: add missing locking to the nbd_dev_add error path

3 years agonbd: remove nbd->destroy_complete
Christoph Hellwig [Wed, 25 Aug 2021 16:31:08 +0000 (18:31 +0200)]
nbd: remove nbd->destroy_complete

The nbd->destroy_complete pointer is not really needed.  For creating
a device without a specific index we now simply skip devices marked
NBD_DESTROY_ON_DISCONNECT, as there is not much point in reusing them.
For device creation with a specific index there is no real need to
treat the case of a requested but not yet finished disconnect differently
from any other device that is being shut down, i.e. we can just return
an error, as a slightly different race window would have anyway.

Fixes: 6e4df4c64881 ("nbd: reduce the nbd_index_mutex scope")
Reported-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Reported-by: syzbot+2c98885bcd769f56b6d6@syzkaller.appspotmail.com
Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20210825163108.50713-7-hch@lst.de
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agonbd: only return usable devices from nbd_find_unused
Christoph Hellwig [Wed, 25 Aug 2021 16:31:07 +0000 (18:31 +0200)]
nbd: only return usable devices from nbd_find_unused

Devices marked as NBD_DESTROY_ON_DISCONNECT can and should be skipped
given that they won't survive the disconnect.  So skip them, try
to grab a reference directly, and just continue if the device
is being torn down or created and thus has a zero refcount.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20210825163108.50713-6-hch@lst.de
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agonbd: set nbd->index before releasing nbd_index_mutex
Tetsuo Handa [Wed, 25 Aug 2021 16:31:06 +0000 (18:31 +0200)]
nbd: set nbd->index before releasing nbd_index_mutex

Set nbd->index before releasing nbd_index_mutex, as populate_nbd_status()
might access nbd->index as soon as nbd_index_mutex is released.

Fixes: 6e4df4c64881 ("nbd: reduce the nbd_index_mutex scope")
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
[hch: split from a larger patch]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20210825163108.50713-5-hch@lst.de
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agonbd: prevent IDR lookups from finding partially initialized devices
Tetsuo Handa [Wed, 25 Aug 2021 16:31:05 +0000 (18:31 +0200)]
nbd: prevent IDR lookups from finding partially initialized devices

Previously nbd_index_mutex was held during whole add/remove/lookup
operations in order to guarantee that partially initialized devices are
not reachable via idr_find() or idr_for_each(). But now that partially
initialized devices become reachable as soon as idr_alloc() succeeds,
we need to skip partially initialized devices. Since it seems that
all functions use refcount_inc_not_zero(&nbd->refs) in order to skip
destroying devices, update nbd->refs from zero to non-zero as the last
step of device initialization in order to also skip partially initialized
devices.

Fixes: 6e4df4c64881 ("nbd: reduce the nbd_index_mutex scope")
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
[hch: split from a larger patch, added comments]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20210825163108.50713-4-hch@lst.de
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agonbd: reset NBD to NULL when restarting in nbd_genl_connect
Christoph Hellwig [Wed, 25 Aug 2021 16:31:04 +0000 (18:31 +0200)]
nbd: reset NBD to NULL when restarting in nbd_genl_connect

When nbd_genl_connect restarts to wait for a disconnecting device, nbd
needs to be reset to NULL.  Do that by factoring out a helper to find
an unused device.

Fixes: 6177b56c96ff ("nbd: refactor device search and allocation in nbd_genl_connect")
Reported-by: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
Reported-by: Hillf Danton <hdanton@sina.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20210825163108.50713-3-hch@lst.de
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agonbd: add missing locking to the nbd_dev_add error path
Tetsuo Handa [Wed, 25 Aug 2021 16:31:03 +0000 (18:31 +0200)]
nbd: add missing locking to the nbd_dev_add error path

idr_remove needs external synchronization.

Fixes: 6e4df4c64881 ("nbd: reduce the nbd_index_mutex scope")
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
[hch: split from a larger patch]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20210825163108.50713-2-hch@lst.de
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoMerge branch 'for-5.15/io_uring' into for-next
Jens Axboe [Wed, 25 Aug 2021 19:04:48 +0000 (13:04 -0600)]
Merge branch 'for-5.15/io_uring' into for-next

* for-5.15/io_uring:
  io_uring: don't free request to slab

3 years agoio_uring: don't free request to slab
Hao Xu [Wed, 25 Aug 2021 17:58:56 +0000 (01:58 +0800)]
io_uring: don't free request to slab

It's not necessary to free the request back to the slab when we fail to
get an sqe; just move it to state->free_list.

Signed-off-by: Hao Xu <haoxu@linux.alibaba.com>
Reviewed-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/20210825175856.194299-1-haoxu@linux.alibaba.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoMerge branch 'for-5.15/block' into for-next
Jens Axboe [Wed, 25 Aug 2021 12:47:08 +0000 (06:47 -0600)]
Merge branch 'for-5.15/block' into for-next

* for-5.15/block:
  sg: pass the device name to blk_trace_setup
  block, bfq: cleanup the repeated declaration
  blk-crypto: fix check for too-large dun_bytes

3 years agosg: pass the device name to blk_trace_setup
Christoph Hellwig [Wed, 25 Aug 2021 07:54:38 +0000 (09:54 +0200)]
sg: pass the device name to blk_trace_setup

Fix a regression that passed a NULL device name to blk_trace_setup
accidentally.

Fixes: aebbb5831fbd ("sg: do not allocate a gendisk")
Reported-by: syzbot+f74aa89114a236643919@syzkaller.appspotmail.com
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
Link: https://lore.kernel.org/r/20210825075438.1883687-1-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoblock, bfq: cleanup the repeated declaration
Shaokun Zhang [Wed, 25 Aug 2021 06:19:51 +0000 (14:19 +0800)]
block, bfq: cleanup the repeated declaration

Function 'bfq_entity_to_bfqq' is declared twice, so remove the
repeated declaration and blank line.

Cc: Paolo Valente <paolo.valente@linaro.org>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Shaokun Zhang <zhangshaokun@hisilicon.com>
Link: https://lore.kernel.org/r/1629872391-46399-1-git-send-email-zhangshaokun@hisilicon.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoblk-crypto: fix check for too-large dun_bytes
Eric Biggers [Wed, 25 Aug 2021 05:59:18 +0000 (22:59 -0700)]
blk-crypto: fix check for too-large dun_bytes

dun_bytes needs to be less than or equal to the IV size of the
encryption mode, not just less than or equal to BLK_CRYPTO_MAX_IV_SIZE.

Currently this doesn't matter since blk_crypto_init_key() is never
actually passed invalid values, but we might as well fix this.

Fixes: a892c8d52c02 ("block: Inline encryption support for blk-mq")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Link: https://lore.kernel.org/r/20210825055918.51975-1-ebiggers@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
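
A sketch of the tightened check, paraphrased from the description rather
than copied from the patch:

/* In blk_crypto_init_key(): the DUN must fit in the IV of the selected
 * mode, not merely in BLK_CRYPTO_MAX_IV_SIZE. */
const struct blk_crypto_mode *mode = &blk_crypto_modes[crypto_mode];

if (dun_bytes == 0 || dun_bytes > mode->ivsize)
    return -EINVAL;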
3 years agoMerge branch 'for-5.15/libata' into for-next
Jens Axboe [Wed, 25 Aug 2021 12:37:41 +0000 (06:37 -0600)]
Merge branch 'for-5.15/libata' into for-next

* for-5.15/libata:
  include:libata: fix boolreturn.cocci warnings

3 years agoMerge branch 'for-5.15/block' into for-next
Jens Axboe [Wed, 25 Aug 2021 12:37:36 +0000 (06:37 -0600)]
Merge branch 'for-5.15/block' into for-next

* for-5.15/block:
  blk-zoned: allow BLKREPORTZONE without CAP_SYS_ADMIN
  blk-zoned: allow zone management send operations without CAP_SYS_ADMIN
  block: mark blkdev_fsync static
  block: refine the disk_live check in del_gendisk
  mmc: sdhci-tegra: Enable MMC_CAP2_ALT_GPT_TEGRA
  mmc: block: Support alternative_gpt_sector() operation
  partitions/efi: Support non-standard GPT location
  block: Add alternative_gpt_sector() operation
  bio: fix page leak bio_add_hw_page failure
  block: remove CONFIG_DEBUG_BLOCK_EXT_DEVT
  block: remove a pointless call to MINOR() in device_add_disk

3 years agoMerge branch 'io_uring-bio-cache.5' into for-next
Jens Axboe [Wed, 25 Aug 2021 12:37:32 +0000 (06:37 -0600)]
Merge branch 'io_uring-bio-cache.5' into for-next

* io_uring-bio-cache.5:
  bio: improve kerneldoc documentation for bio_alloc_kiocb()
  block: provide bio_clear_hipri() helper
  block: use the percpu bio cache in __blkdev_direct_IO
  io_uring: enable use of bio alloc cache
  block: clear BIO_PERCPU_CACHE flag if polling isn't supported
  bio: add allocation cache abstraction
  fs: add kiocb alloc cache flag
  bio: optimize initialization of a bio

3 years agoMerge branch 'for-5.15/io_uring-vfs' into for-next
Jens Axboe [Wed, 25 Aug 2021 12:37:29 +0000 (06:37 -0600)]
Merge branch 'for-5.15/io_uring-vfs' into for-next

* for-5.15/io_uring-vfs:
  io_uring: add support for IORING_OP_LINKAT
  io_uring: add support for IORING_OP_SYMLINKAT
  io_uring: add support for IORING_OP_MKDIRAT
  namei: update do_*() helpers to return ints
  namei: make do_linkat() take struct filename
  namei: add getname_uflags()
  namei: make do_symlinkat() take struct filename
  namei: make do_mknodat() take struct filename
  namei: make do_mkdirat() take struct filename
  namei: change filename_parentat() calling conventions
  namei: ignore ERR/NULL names in putname()

3 years agoMerge branch 'for-5.15/io_uring' into for-next
Jens Axboe [Wed, 25 Aug 2021 12:37:26 +0000 (06:37 -0600)]
Merge branch 'for-5.15/io_uring' into for-next

* for-5.15/io_uring: (74 commits)
  io_uring: accept directly into fixed file table
  io_uring: hand code io_accept() fd installing
  io_uring: openat directly into fixed fd table
  net: add accept helper not installing fd
  io_uring: fix io_try_cancel_userdata race for iowq
  io_uring: IRQ rw completion batching
  io_uring: batch task work locking
  io_uring: flush completions for fallbacks
  io_uring: add ->splice_fd_in checks
  io_uring: add clarifying comment for io_cqring_ev_posted()
  io_uring: place fixed tables under memcg limits
  io_uring: limit fixed table size by RLIMIT_NOFILE
  io_uring: fix lack of protection for compl_nr
  io_uring: Add register support for non-4k PAGE_SIZE
  io_uring: extend task put optimisations
  io_uring: add comments on why PF_EXITING checking is safe
  io-wq: move nr_running and worker_refs out of wqe->lock protection
  io_uring: fix io_timeout_remove locking
  io_uring: improve same wq polling
  io_uring: reuse io_req_complete_post()
  ...

3 years agoio_uring: accept directly into fixed file table
Pavel Begunkov [Wed, 25 Aug 2021 11:25:47 +0000 (12:25 +0100)]
io_uring: accept directly into fixed file table

As done with the open opcodes, allow accept to skip installing the fd into
the process's file table and put it directly into io_uring's fixed file
table. Same restrictions and design as for open.

Suggested-by: Josh Triplett <josh@joshtriplett.org>
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Link: https://lore.kernel.org/r/6d16163f376fac7ac26a656de6b42199143e9721.1629888991.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: hand code io_accept() fd installing
Pavel Begunkov [Wed, 25 Aug 2021 11:25:46 +0000 (12:25 +0100)]
io_uring: hand code io_accept() fd installing

Make io_accept() handle file descriptor allocation and installation itself.
A preparation patch for bypassing file tables.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Link: https://lore.kernel.org/r/5b73d204caa0ce979ccb98136695b60f52a3d98c.1629888991.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: openat directly into fixed fd table
Pavel Begunkov [Wed, 25 Aug 2021 11:25:45 +0000 (12:25 +0100)]
io_uring: openat directly into fixed fd table

Instead of opening a file into a process's file table as usual and then
registering the fd within io_uring, some users may want to skip the
first step and place it directly into io_uring's fixed file table.
This patch adds such a capability for IORING_OP_OPENAT and
IORING_OP_OPENAT2.

The behaviour is controlled by setting sqe->file_index, where 0 implies
the old behaviour using normal file tables. If a non-zero value is
specified, then it will behave as described and place the file into a
fixed file slot sqe->file_index - 1. A fixed file table should already
have been registered, and the slot should be valid and empty, otherwise
the operation will fail.

Keep the error codes consistent with IORING_OP_FILES_UPDATE, ENXIO and
EINVAL on inappropriate fixed tables, and return EBADF on collision with
an already registered file.

Note: IOSQE_FIXED_FILE can't be used to switch between modes, because
accept takes a file, and it already uses the flag with a different
meaning.

Suggested-by: Josh Triplett <josh@joshtriplett.org>
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Link: https://lore.kernel.org/r/e9b33d1163286f51ea707f87d95bd596dada1e65.1629888991.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
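
A hedged userspace sketch of the mechanism (liburing's io_uring_prep_openat()
assumed; the sqe->file_index field requires headers from this series, and
liburing later grew io_uring_prep_openat_direct() for the same thing).  Accept
uses the same field via the preceding patches:

#include <fcntl.h>
#include <liburing.h>

static int open_into_slot(struct io_uring *ring, const char *path,
                          unsigned int slot)
{
    struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

    if (!sqe)
        return -1;
    io_uring_prep_openat(sqe, AT_FDCWD, path, O_RDONLY, 0);
    sqe->file_index = slot + 1; /* 0 keeps the old "install an fd" behaviour */
    return io_uring_submit(ring);
}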
3 years agonet: add accept helper not installing fd
Pavel Begunkov [Wed, 25 Aug 2021 11:25:44 +0000 (12:25 +0100)]
net: add accept helper not installing fd

Introduce and reuse a helper that acts similarly to __sys_accept4_file()
but returns a struct file instead of installing a file descriptor. It
will be used by io_uring.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Acked-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Acked-by: David S. Miller <davem@davemloft.net>
Link: https://lore.kernel.org/r/c57b9e8e818d93683a3d24f8ca50ca038d1da8c4.1629888991.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoblk-zoned: allow BLKREPORTZONE without CAP_SYS_ADMIN
Niklas Cassel [Wed, 11 Aug 2021 11:05:19 +0000 (11:05 +0000)]
blk-zoned: allow BLKREPORTZONE without CAP_SYS_ADMIN

A user space process should not need the CAP_SYS_ADMIN capability set
in order to perform a BLKREPORTZONE ioctl.

Getting the zone report is required in order to get the write pointer.
Neither read() nor write() requires CAP_SYS_ADMIN, so it is reasonable
that a user space process that can read/write from/to the device, also
can get the write pointer. (Since e.g. writes have to be at the write
pointer.)

Fixes: 3ed05a987e0f ("blk-zoned: implement ioctls")
Signed-off-by: Niklas Cassel <niklas.cassel@wdc.com>
Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
Reviewed-by: Aravind Ramesh <aravind.ramesh@wdc.com>
Reviewed-by: Adam Manzanares <a.manzanares@samsung.com>
Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Cc: stable@vger.kernel.org # v4.10+
Link: https://lore.kernel.org/r/20210811110505.29649-3-Niklas.Cassel@wdc.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
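
A minimal userspace sketch of the call this change opens up to unprivileged
readers (standard uapi structures from linux/blkzoned.h; the device path is a
placeholder argument):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/blkzoned.h>

int main(int argc, char **argv)
{
    unsigned int nr = 4;
    struct blk_zone_report *rep;
    int fd, i;

    if (argc < 2)
        return 1;
    fd = open(argv[1], O_RDONLY);   /* read access is now enough */
    if (fd < 0)
        return 1;

    rep = calloc(1, sizeof(*rep) + nr * sizeof(struct blk_zone));
    rep->sector = 0;                /* start reporting from the first zone */
    rep->nr_zones = nr;

    if (ioctl(fd, BLKREPORTZONE, rep) < 0) {
        perror("BLKREPORTZONE");
        return 1;
    }
    for (i = 0; i < (int)rep->nr_zones; i++)
        printf("zone %d: start=%llu wp=%llu\n", i,
               (unsigned long long)rep->zones[i].start,
               (unsigned long long)rep->zones[i].wp);
    free(rep);
    close(fd);
    return 0;
}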
3 years agoblk-zoned: allow zone management send operations without CAP_SYS_ADMIN
Niklas Cassel [Wed, 11 Aug 2021 11:05:18 +0000 (11:05 +0000)]
blk-zoned: allow zone management send operations without CAP_SYS_ADMIN

Zone management send operations (BLKRESETZONE, BLKOPENZONE, BLKCLOSEZONE
and BLKFINISHZONE) should be allowed under the same permissions as write().
(write() does not require CAP_SYS_ADMIN).

Additionally, other ioctls like BLKSECDISCARD and BLKZEROOUT only check if
the fd was successfully opened with FMODE_WRITE.
(They do not require CAP_SYS_ADMIN).

Currently, zone management send operations require both CAP_SYS_ADMIN
and that the fd was successfully opened with FMODE_WRITE.

Remove the CAP_SYS_ADMIN requirement, so that zone management send
operations match the access control requirement of write(), BLKSECDISCARD
and BLKZEROOUT.

Fixes: 3ed05a987e0f ("blk-zoned: implement ioctls")
Signed-off-by: Niklas Cassel <niklas.cassel@wdc.com>
Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
Reviewed-by: Aravind Ramesh <aravind.ramesh@wdc.com>
Reviewed-by: Adam Manzanares <a.manzanares@samsung.com>
Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Cc: stable@vger.kernel.org # v4.10+
Link: https://lore.kernel.org/r/20210811110505.29649-2-Niklas.Cassel@wdc.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
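
A matching sketch for the zone management send side (plain uapi; after this
change only an fd opened with write access is needed, as for write(),
BLKSECDISCARD and BLKZEROOUT):

#include <sys/ioctl.h>
#include <linux/blkzoned.h>

static int reset_zone(int fd, __u64 zone_start_sector, __u64 zone_len_sectors)
{
    struct blk_zone_range range = {
        .sector     = zone_start_sector,
        .nr_sectors = zone_len_sectors,
    };

    /* BLKOPENZONE, BLKCLOSEZONE and BLKFINISHZONE take the same argument. */
    return ioctl(fd, BLKRESETZONE, &range);
}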
3 years agoinclude:libata: fix boolreturn.cocci warnings
Jing Yangyang [Tue, 24 Aug 2021 06:07:02 +0000 (23:07 -0700)]
include:libata: fix boolreturn.cocci warnings

./include/linux/libata.h:1462:8-9:WARNING: return of 0/1 in function
'ata_is_host_link' with return type bool

Return statements in functions returning bool should use true/false
instead of 1/0.

Generated by: scripts/coccinelle/misc/boolreturn.cocci

Reported-by: Zeal Robot <zealci@zte.com.cn>
Signed-off-by: Jing Yangyang <jing.yangyang@zte.com.cn>
Link: https://lore.kernel.org/r/20210824060702.59006-1-deng.changcheng@zte.com.cn
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoblock: mark blkdev_fsync static
Christoph Hellwig [Tue, 24 Aug 2021 15:18:23 +0000 (17:18 +0200)]
block: mark blkdev_fsync static

blkdev_fsync is only used inside of block_dev.c since the
removal of the raw driver.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20210824151823.1575100-1-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoblock: refine the disk_live check in del_gendisk
Christoph Hellwig [Tue, 24 Aug 2021 14:43:10 +0000 (16:43 +0200)]
block: refine the disk_live check in del_gendisk

Hidden gendisks will never be marked live.

Fixes: 40b3a52ffc5b ("block: add a sanity check for a live disk in del_gendisk")
Reported-by: Bruno Goncalves <bgoncalv@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20210824144310.1487816-1-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agommc: sdhci-tegra: Enable MMC_CAP2_ALT_GPT_TEGRA
Dmitry Osipenko [Fri, 20 Aug 2021 00:45:36 +0000 (03:45 +0300)]
mmc: sdhci-tegra: Enable MMC_CAP2_ALT_GPT_TEGRA

Tegra20/30/114/124 Android devices place GPT at a non-standard location.
Enable GPT entry scanning at that location.

Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Acked-by: Thierry Reding <treding@nvidia.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Link: https://lore.kernel.org/r/20210820004536.15791-5-digetx@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agommc: block: Support alternative_gpt_sector() operation
Dmitry Osipenko [Fri, 20 Aug 2021 00:45:35 +0000 (03:45 +0300)]
mmc: block: Support alternative_gpt_sector() operation

Support the generic alternative_gpt_sector() block device operation.
It calculates the location of the GPT entry for the eMMC of NVIDIA Tegra
Android devices. Add a new MMC_CAP2_ALT_GPT_TEGRA flag that enables
scanning of the alternative GPT sector, and add a raw_boot_mult field to
mmc_ext_csd which allows getting the size of the boot partitions that is
needed for the calculation.

Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Link: https://lore.kernel.org/r/20210820004536.15791-4-digetx@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agopartitions/efi: Support non-standard GPT location
Dmitry Osipenko [Fri, 20 Aug 2021 00:45:34 +0000 (03:45 +0300)]
partitions/efi: Support non-standard GPT location

Support looking up GPT at a non-standard location specified by a block
device driver.

Acked-by: Davidlohr Bueso <dbueso@suse.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Link: https://lore.kernel.org/r/20210820004536.15791-3-digetx@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoblock: Add alternative_gpt_sector() operation
Dmitry Osipenko [Fri, 20 Aug 2021 00:45:33 +0000 (03:45 +0300)]
block: Add alternative_gpt_sector() operation

Add an alternative_gpt_sector() block device operation which specifies an
alternative location of the GPT entry. This allows us to support Android
devices that have the GPT entry at a non-standard location and can't be
repartitioned easily.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Suggested-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dmitry Osipenko <digetx@gmail.com>
Reviewed-by: Ulf Hansson <ulf.hansson@linaro.org>
Link: https://lore.kernel.org/r/20210820004536.15791-2-digetx@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
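
A hedged driver-side sketch of the new method (the foo_* names are
hypothetical; the signature follows the description of the operation in this
series):

static int foo_alternative_gpt_sector(struct gendisk *disk, sector_t *sector)
{
    /* Hypothetical: derive the sector from device geometry, e.g. from the
     * boot partition size as the mmc patch in this series does. */
    *sector = foo_alt_gpt_sector(disk);
    return 0;   /* negative errno if there is no alternative location */
}

static const struct block_device_operations foo_fops = {
    .owner                  = THIS_MODULE,
    .alternative_gpt_sector = foo_alternative_gpt_sector,
};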
3 years agobio: fix page leak bio_add_hw_page failure
Pavel Begunkov [Mon, 19 Jul 2021 10:53:00 +0000 (11:53 +0100)]
bio: fix page leak bio_add_hw_page failure

__bio_iov_append_get_pages() doesn't put the pages that were not appended
on bio_add_hw_page() failure, potentially leaking them; fix it. Also do
the same for __bio_iov_iter_get_pages(), even though it looks like that
case can't be triggered by userspace.

Fixes: 0512a75b98f8 ("block: Introduce REQ_OP_ZONE_APPEND")
Cc: stable@vger.kernel.org # 5.8+
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/1edfa6a2ffd66d55e6345a477df5387d2c1415d0.1626653825.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoblock: remove CONFIG_DEBUG_BLOCK_EXT_DEVT
Christoph Hellwig [Tue, 24 Aug 2021 07:52:16 +0000 (09:52 +0200)]
block: remove CONFIG_DEBUG_BLOCK_EXT_DEVT

This might have been a neat debug aid when the extended dev_t was
added, but that time is long gone.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20210824075216.1179406-3-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoblock: remove a pointless call to MINOR() in device_add_disk
Christoph Hellwig [Tue, 24 Aug 2021 07:52:15 +0000 (09:52 +0200)]
block: remove a pointless call to MINOR() in device_add_disk

blk_alloc_ext_minor already returns just a minor number, so no need to
mask the high bits.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20210824075216.1179406-2-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: add support for IORING_OP_LINKAT
Dmitry Kadashev [Thu, 8 Jul 2021 06:34:47 +0000 (13:34 +0700)]
io_uring: add support for IORING_OP_LINKAT

IORING_OP_LINKAT behaves like linkat(2) and takes the same flags and
arguments.

In some internal places 'hardlink' is used instead of 'link' to avoid
confusion with the SQE links. The name 'link' conflicts with the existing
'link' member of io_kiocb.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Suggested-by: Christian Brauner <christian.brauner@ubuntu.com>
Link: https://lore.kernel.org/io-uring/20210514145259.wtl4xcsp52woi6ab@wittgenstein/
Signed-off-by: Dmitry Kadashev <dkadashev@gmail.com>
Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
Link: https://lore.kernel.org/r/20210708063447.3556403-12-dkadashev@gmail.com
[axboe: add splice_fd_in check]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
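
A hedged userspace sketch using liburing helper names (liburing gained these
prep helpers alongside the new opcodes; flags and arguments mirror linkat(2)):

#include <fcntl.h>
#include <liburing.h>

static int queue_linkat(struct io_uring *ring,
                        const char *oldpath, const char *newpath)
{
    struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

    if (!sqe)
        return -1;
    io_uring_prep_linkat(sqe, AT_FDCWD, oldpath, AT_FDCWD, newpath, 0);
    /* io_uring_prep_symlinkat() and io_uring_prep_mkdirat() follow the same
     * pattern for IORING_OP_SYMLINKAT / IORING_OP_MKDIRAT elsewhere in this
     * log. */
    return io_uring_submit(ring);
}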
3 years agoio_uring: add support for IORING_OP_SYMLINKAT
Dmitry Kadashev [Thu, 8 Jul 2021 06:34:46 +0000 (13:34 +0700)]
io_uring: add support for IORING_OP_SYMLINKAT

IORING_OP_SYMLINKAT behaves like symlinkat(2) and takes the same flags
and arguments.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Suggested-by: Christian Brauner <christian.brauner@ubuntu.com>
Link: https://lore.kernel.org/io-uring/20210514145259.wtl4xcsp52woi6ab@wittgenstein/
Signed-off-by: Dmitry Kadashev <dkadashev@gmail.com>
Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
Link: https://lore.kernel.org/r/20210708063447.3556403-11-dkadashev@gmail.com
[axboe: add splice_fd_in check]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoMerge branch 'for-5.15/libata' into for-next
Jens Axboe [Mon, 23 Aug 2021 19:47:16 +0000 (13:47 -0600)]
Merge branch 'for-5.15/libata' into for-next

* for-5.15/libata:
  docs: sysfs-block-device: document ncq_prio_supported
  docs: sysfs-block-device: improve ncq_prio_enable documentation
  libata: Introduce ncq_prio_supported sysfs sttribute
  libata: print feature list on device scan
  libata: fix ata_read_log_page() warning
  libata: cleanup NCQ priority handling
  libata: cleanup ata_dev_configure()
  libata: cleanup device sleep capability detection
  libata: simplify ata_scsi_rbuf_fill()
  libata: fix ata_host_start()
  ata: sata_dwc_460ex: No need to call phy_exit() befre phy_init()

3 years agoMerge branch 'for-5.15/drivers' into for-next
Jens Axboe [Mon, 23 Aug 2021 19:47:12 +0000 (13:47 -0600)]
Merge branch 'for-5.15/drivers' into for-next

* for-5.15/drivers: (33 commits)
  nvme: remove the unused NVME_NS_* enum
  nvme: remove nvm_ndev from ns
  nvme: Have NVME_FABRICS select NVME_CORE instead of transport drivers
  block: nbd: add sanity check for first_minor
  nvmet: check that host sqsize does not exceed ctrl MQES
  nvmet: avoid duplicate qid in connect cmd
  nvmet: pass back cntlid on successful completion
  nvme-rdma: don't update queue count when failing to set io queues
  nvme-tcp: don't update queue count when failing to set io queues
  nvme-tcp: pair send_mutex init with destroy
  nvme: allow user toggling hmb usage
  nvme-pci: disable hmb on idle suspend
  nvmet: remove redundant assignments of variable status
  nvmet: add set feature tracing support
  nvme: add set feature tracing support
  nvme-fabrics: remove superfluous nvmf_host_put in nvmf_parse_options
  nvme-pci: cmb sysfs: one file, one value
  nvme-pci: use attribute group for cmb sysfs
  nvme: code command_id with a genctr for use-after-free validation
  nvme-tcp: don't check blk_mq_tag_to_rq when receiving pdu data
  ...

Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoMerge branch 'for-5.15/block' into for-next
Jens Axboe [Mon, 23 Aug 2021 19:46:55 +0000 (13:46 -0600)]
Merge branch 'for-5.15/block' into for-next

* for-5.15/block: (115 commits)
  null_blk: add error handling support for add_disk()
  virtio_blk: add error handling support for add_disk()
  block: add error handling for device_add_disk / add_disk
  block: return errors from disk_alloc_events
  block: return errors from blk_integrity_add
  block: call blk_register_queue earlier in device_add_disk
  block: call blk_integrity_add earlier in device_add_disk
  block: create the bdi link earlier in device_add_disk
  block: call bdev_add later in device_add_disk
  block: fold register_disk into device_add_disk
  block: add a sanity check for a live disk in del_gendisk
  block: add an explicit ->disk backpointer to the request_queue
  block: hold a request_queue reference for the lifetime of struct gendisk
  block: pass a request_queue to __blk_alloc_disk
  block: remove the minors argument to __alloc_disk_node
  block: remove alloc_disk and alloc_disk_node
  block: cleanup the lockdep handling in *alloc_disk
  sg: do not allocate a gendisk
  st: do not allocate a gendisk
  nvme: use blk_mq_alloc_disk
  ...

Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agobio: improve kerneldoc documentation for bio_alloc_kiocb()
Jens Axboe [Fri, 13 Aug 2021 13:53:09 +0000 (07:53 -0600)]
bio: improve kerneldoc documentation for bio_alloc_kiocb()

We're missing a description for the 'nr_vecs' parameter. While in there,
clarify that freeing a bio allocated through this function must be done
from process context.

Fixes: 1cbbd31c4ada ("bio: add allocation cache abstraction")
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoblock: provide bio_clear_hipri() helper
Jens Axboe [Thu, 12 Aug 2021 17:42:53 +0000 (11:42 -0600)]
block: provide bio_clear_hipri() helper

Any case that turns off REQ_HIPRI must also clear BIO_PERCPU_CACHE,
as non-polled IO may complete through hard/soft IRQ and hence isn't
safe for our polled bio alloc cache.

Provide a helper that does just that, and use it in the merging code as
well if we split a bio and turn off polling.

Fixes: be863b9e4348 ("block: clear BIO_PERCPU_CACHE flag if polling isn't supported")
Reported-by: Keith Busch <kbusch@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoblock: use the percpu bio cache in __blkdev_direct_IO
Christoph Hellwig [Thu, 12 Aug 2021 15:05:52 +0000 (09:05 -0600)]
block: use the percpu bio cache in __blkdev_direct_IO

Use bio_alloc_kiocb to dip into the percpu cache of bios when the
caller asks for it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: enable use of bio alloc cache
Jens Axboe [Mon, 8 Mar 2021 18:40:23 +0000 (11:40 -0700)]
io_uring: enable use of bio alloc cache

Mark polled IO as being safe for dipping into the bio allocation
cache, in case the targeted bio_set has it enabled.

This brings an IOPOLL gen2 Optane QD=128 workload from ~3.2M IOPS to
~3.5M IOPS.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoblock: clear BIO_PERCPU_CACHE flag if polling isn't supported
Jens Axboe [Wed, 11 Aug 2021 16:19:06 +0000 (10:19 -0600)]
block: clear BIO_PERCPU_CACHE flag if polling isn't supported

The bio alloc cache relies on the fact that a polled bio will complete
in process context; clear the cacheable flag if we disable polling
for a given bio.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agobio: add allocation cache abstraction
Jens Axboe [Mon, 8 Mar 2021 18:37:47 +0000 (11:37 -0700)]
bio: add allocation cache abstraction

Add a per-cpu bio_set cache for bio allocations, enabling us to quickly
recycle them instead of going through the slab allocator. This cache
isn't IRQ safe, and hence is only really suitable for polled IO.

Very simple - it keeps a count of bios in the cache, and maintains a max
of 512 with a slack of 64. If we get above max + slack, we drop slack
number of bios.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agofs: add kiocb alloc cache flag
Jens Axboe [Tue, 10 Aug 2021 15:29:55 +0000 (09:29 -0600)]
fs: add kiocb alloc cache flag

If this kiocb can safely use the polled bio allocation cache, then this
flag must be set. Generally this can be set for polled IO, where we will
not see IRQ completions of the request.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
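
A sketch of how the pieces introduced in this branch fit together (not taken
from a specific driver; the foo_* names are hypothetical):

static struct bio_set foo_bio_set;

static int __init foo_init(void)
{
    /* Opt the bio_set in to the per-cpu bio cache. */
    return bioset_init(&foo_bio_set, 128, 0, BIOSET_PERCPU_CACHE);
}

static void foo_submit(struct kiocb *iocb, unsigned short nr_vecs)
{
    /* Only dips into the cache when the kiocb carries IOCB_ALLOC_CACHE,
     * i.e. polled IO that will not complete from IRQ context. */
    struct bio *bio = bio_alloc_kiocb(iocb, nr_vecs, &foo_bio_set);

    /* fill in the bio as usual, then submit it */
    submit_bio(bio);
}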
3 years agobio: optimize initialization of a bio
Jens Axboe [Wed, 11 Aug 2021 15:20:04 +0000 (09:20 -0600)]
bio: optimize initialization of a bio

The memset() used is measurably slower in targeted benchmarks, wasting
about 1% of the total runtime, or 50% of the (later) hot path cached
bio alloc. Get rid of it and fill in the bio manually.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: fix io_try_cancel_userdata race for iowq
Pavel Begunkov [Mon, 23 Aug 2021 12:30:44 +0000 (13:30 +0100)]
io_uring: fix io_try_cancel_userdata race for iowq

WARNING: CPU: 1 PID: 5870 at fs/io_uring.c:5975 io_try_cancel_userdata+0x30f/0x540 fs/io_uring.c:5975
CPU: 0 PID: 5870 Comm: iou-wrk-5860 Not tainted 5.14.0-rc6-next-20210820-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
RIP: 0010:io_try_cancel_userdata+0x30f/0x540 fs/io_uring.c:5975
Call Trace:
 io_async_cancel fs/io_uring.c:6014 [inline]
 io_issue_sqe+0x22d5/0x65a0 fs/io_uring.c:6407
 io_wq_submit_work+0x1dc/0x300 fs/io_uring.c:6511
 io_worker_handle_work+0xa45/0x1840 fs/io-wq.c:533
 io_wqe_worker+0x2cc/0xbb0 fs/io-wq.c:582
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295

io_try_cancel_userdata() can be called from io_async_cancel() executing
in the io-wq context, so the warning fires, which is there to alert
anyone accessing task->io_uring->io_wq in a racy way. However,
io_wq_put_and_exit() always first waits for all threads to complete,
so the only detail left is to zero tctx->io_wq after the context is
removed.

note: one little assumption is that when IO_WQ_WORK_CANCEL is set, the
executor won't touch ->io_wq, because io_wq_destroy() might cancel
left-over pending requests in such a way.

Cc: stable@vger.kernel.org
Reported-by: syzbot+b0c9d1588ae92866515f@syzkaller.appspotmail.com
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/dfdd37a80cfa9ffd3e59538929c99cdd55d8699e.1629721757.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: add support for IORING_OP_MKDIRAT
Dmitry Kadashev [Thu, 8 Jul 2021 06:34:45 +0000 (13:34 +0700)]
io_uring: add support for IORING_OP_MKDIRAT

IORING_OP_MKDIRAT behaves like mkdirat(2) and takes the same flags
and arguments.

Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dmitry Kadashev <dkadashev@gmail.com>
Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
Link: https://lore.kernel.org/r/20210708063447.3556403-10-dkadashev@gmail.com
[axboe: add splice_fd_in check]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agonamei: update do_*() helpers to return ints
Dmitry Kadashev [Thu, 8 Jul 2021 06:34:44 +0000 (13:34 +0700)]
namei: update do_*() helpers to return ints

Update the following to return int rather than long, for uniformity with
the rest of the do_* helpers in namei.c:

* do_rmdir()
* do_unlinkat()
* do_mkdirat()
* do_mknodat()
* do_symlinkat()

Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christian Brauner <christian.brauner@ubuntu.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/io-uring/20210514143202.dmzfcgz5hnauy7ze@wittgenstein/
Signed-off-by: Dmitry Kadashev <dkadashev@gmail.com>
Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
Link: https://lore.kernel.org/r/20210708063447.3556403-9-dkadashev@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agonamei: make do_linkat() take struct filename
Dmitry Kadashev [Thu, 8 Jul 2021 06:34:43 +0000 (13:34 +0700)]
namei: make do_linkat() take struct filename

Pass in the struct filename pointers instead of the user string, for
uniformity with do_renameat2, do_unlinkat, do_mknodat, etc.

Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christian Brauner <christian.brauner@ubuntu.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/io-uring/20210330071700.kpjoyp5zlni7uejm@wittgenstein/
Signed-off-by: Dmitry Kadashev <dkadashev@gmail.com>
Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
Link: https://lore.kernel.org/r/20210708063447.3556403-8-dkadashev@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agonamei: add getname_uflags()
Dmitry Kadashev [Thu, 8 Jul 2021 06:34:42 +0000 (13:34 +0700)]
namei: add getname_uflags()

There are a couple of places where we already open-code the (flags &
AT_EMPTY_PATH) check and io_uring will likely add another one in the
future.  Let's just add a simple helper getname_uflags() that handles
this directly and use it.
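
The helper is essentially a thin wrapper; a rough sketch of the intended
shape, assuming getname_flags() as the underlying primitive:

    struct filename *getname_uflags(const char __user *filename, int uflags)
    {
            int flags = (uflags & AT_EMPTY_PATH) ? LOOKUP_EMPTY : 0;

            return getname_flags(filename, flags, NULL);
    }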

Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christian Brauner <christian.brauner@ubuntu.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/io-uring/20210415100815.edrn4a7cy26wkowe@wittgenstein/
Signed-off-by: Christian Brauner <christian.brauner@ubuntu.com>
Signed-off-by: Dmitry Kadashev <dkadashev@gmail.com>
Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
Link: https://lore.kernel.org/r/20210708063447.3556403-7-dkadashev@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agonamei: make do_symlinkat() take struct filename
Dmitry Kadashev [Thu, 8 Jul 2021 06:34:41 +0000 (13:34 +0700)]
namei: make do_symlinkat() take struct filename

Pass in the struct filename pointers instead of the user string, for
uniformity with the recently converted do_mknodat(), do_unlinkat(),
do_renameat(), do_mkdirat().

Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christian Brauner <christian.brauner@ubuntu.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/io-uring/20210330071700.kpjoyp5zlni7uejm@wittgenstein/
Signed-off-by: Dmitry Kadashev <dkadashev@gmail.com>
Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
Link: https://lore.kernel.org/r/20210708063447.3556403-6-dkadashev@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agonamei: make do_mknodat() take struct filename
Dmitry Kadashev [Thu, 8 Jul 2021 06:34:40 +0000 (13:34 +0700)]
namei: make do_mknodat() take struct filename

Pass in the struct filename pointers instead of the user string, for
uniformity with the recently converted do_unlinkat(), do_renameat(),
do_mkdirat().

Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christian Brauner <christian.brauner@ubuntu.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/io-uring/20210330071700.kpjoyp5zlni7uejm@wittgenstein/
Signed-off-by: Dmitry Kadashev <dkadashev@gmail.com>
Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
Link: https://lore.kernel.org/r/20210708063447.3556403-5-dkadashev@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agonamei: make do_mkdirat() take struct filename
Dmitry Kadashev [Thu, 8 Jul 2021 06:34:39 +0000 (13:34 +0700)]
namei: make do_mkdirat() take struct filename

Pass in the struct filename pointers instead of the user string, and
update the three callers to do the same. This is heavily based on
commit dbea8d345177 ("fs: make do_renameat2() take struct filename").

This behaves like do_unlinkat() and do_renameat2().
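
A hedged sketch of how the syscall wrapper ends up looking after the
conversion (not the literal diff):

    int do_mkdirat(int dfd, struct filename *name, umode_t mode);

    SYSCALL_DEFINE3(mkdirat, int, dfd, const char __user *, pathname, umode_t, mode)
    {
            return do_mkdirat(dfd, getname(pathname), mode);
    }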

Cc: Al Viro <viro@zeniv.linux.org.uk>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dmitry Kadashev <dkadashev@gmail.com>
Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
Link: https://lore.kernel.org/r/20210708063447.3556403-4-dkadashev@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agonamei: change filename_parentat() calling conventions
Dmitry Kadashev [Thu, 8 Jul 2021 06:34:38 +0000 (13:34 +0700)]
namei: change filename_parentat() calling conventions

Since commit 5c31b6cedb675 ("namei: saner calling conventions for
filename_parentat()") filename_parentat() has had the following behavior
with respect to the passed-in struct filename *:

* On error the name is consumed (putname() is called on it);
* On success the name is returned back as the return value;

Now there is a need for filename_create() and filename_lookup() variants
that do not consume the passed filename, and following the same "consume
the name only on error" semantics has proven hard to reason about and
results in confusing code.

Hence this preparation change splits filename_parentat() into two: one
that always consumes the name and another that never consumes it. This
will allow implementing two filename_create() variants in the same way,
and it is a consistent and hopefully easier-to-reason-about approach.
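
To illustrate why the old convention is awkward, a caller under the
"consume only on error" rule looks roughly like this (illustrative
sketch only):

    name = filename_parentat(dfd, getname(pathname), 0, &path, &last, &type);
    if (IS_ERR(name))
            return PTR_ERR(name);   /* the name was already put for us */
    /* ... use path/last/type ... */
    putname(name);                  /* but on success we must put it ourselves */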

Link: https://lore.kernel.org/io-uring/CAOKbgA7MiqZAq3t-HDCpSGUFfco4hMA9ArAE-74fTpU+EkvKPw@mail.gmail.com/
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christian Brauner <christian.brauner@ubuntu.com>
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Dmitry Kadashev <dkadashev@gmail.com>
Link: https://lore.kernel.org/r/20210708063447.3556403-3-dkadashev@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agonamei: ignore ERR/NULL names in putname()
Dmitry Kadashev [Thu, 8 Jul 2021 06:34:37 +0000 (13:34 +0700)]
namei: ignore ERR/NULL names in putname()

Supporting ERR/NULL names in putname() makes callers' code cleaner; some
other path-walking functions already accept ERR/NULL arguments for the
same reason.

This also removes a few existing IS_ERR checks before putname().
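
The change amounts to an early return at the top of putname(); a minimal
sketch:

    void putname(struct filename *name)
    {
            if (IS_ERR_OR_NULL(name))
                    return;         /* tolerate ERR_PTR()/NULL callers */

            /* ... existing refcount drop and freeing ... */
    }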

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/io-uring/CAHk-=wgCac9hBsYzKMpHk0EbLgQaXR=OUAjHaBtaY+G8A9KhFg@mail.gmail.com/
Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christian Brauner <christian.brauner@ubuntu.com>
Signed-off-by: Dmitry Kadashev <dkadashev@gmail.com>
Link: https://lore.kernel.org/r/20210708063447.3556403-2-dkadashev@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: IRQ rw completion batching
Pavel Begunkov [Wed, 18 Aug 2021 11:42:47 +0000 (12:42 +0100)]
io_uring: IRQ rw completion batching

Employ inline completion logic for read/write completions done via
io_req_task_complete(). If ->uring_lock is contended, just do normal
request completion; if not, make tctx_task_work() grab the lock and do
batched inline completions in io_req_task_complete().
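
Conceptually the completion handler picks between the two paths; a
hedged sketch (helper names approximate the io_uring internals):

    static void io_req_task_complete(struct io_kiocb *req, bool *locked)
    {
            unsigned int cflags = io_put_rw_kbuf(req);

            if (*locked)
                    /* ->uring_lock held by tctx_task_work(): batch inline */
                    io_req_complete_state(req, req->result, cflags);
            else
                    /* lock contended: complete the request the usual way */
                    io_req_complete_post(req, req->result, cflags);
    }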

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/94589c3ce69eaed86a21bb1ec696407a54fab1aa.1629286357.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: batch task work locking
Pavel Begunkov [Wed, 18 Aug 2021 11:42:46 +0000 (12:42 +0100)]
io_uring: batch task work locking

Many task_work handlers either grab ->uring_lock or may benefit from
holding it. Move the locking logic out of the individual handlers into a
lazy approach controlled by tctx_task_work(), so we don't keep doing
tons of mutex lock/unlock cycles.
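
A hedged sketch of the lazy-locking scheme (simplified; names
approximate the io_uring internals):

    /* tctx_task_work() tracks whether it currently holds ->uring_lock */
    static void io_tw_lock(struct io_ring_ctx *ctx, bool *locked)
    {
            if (!*locked) {
                    mutex_lock(&ctx->uring_lock);
                    *locked = true;
            }
    }

    /* ... and drops it once, when switching ctx or finishing the batch */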

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/d6a34e147f2507a2f3e2fa1e38a9c541dcad3929.1629286357.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: flush completions for fallbacks
Pavel Begunkov [Wed, 18 Aug 2021 11:42:45 +0000 (12:42 +0100)]
io_uring: flush completions for fallbacks

io_fallback_req_func() doesn't expect anyone creating inline
completions, and no one currently does that. Teach the function to flush
completions, in preparation for further changes.
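
Roughly, the fallback path now ends by flushing any inline completions
left behind (a hedged sketch):

    mutex_lock(&ctx->uring_lock);
    /* run the queued fallback task_work items ... */
    if (ctx->submit_state.compl_nr)
            io_submit_flush_completions(ctx);
    mutex_unlock(&ctx->uring_lock);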

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/8b941516921f72e1a64d58932d671736892d7fff.1629286357.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: add ->splice_fd_in checks
Pavel Begunkov [Fri, 20 Aug 2021 09:36:37 +0000 (10:36 +0100)]
io_uring: add ->splice_fd_in checks

->splice_fd_in is used only by splice/tee, but no other request type
checks it for validity. Add the check to most request types, excluding
reads/writes/sends/recvs; we don't want the overhead for those and can
leave them as-is until the field is actually used.
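
The added validation is a one-liner per prep handler; a hedged sketch of
the pattern:

    /* in the io_*_prep() helpers of non-splice request types */
    if (unlikely(sqe->splice_fd_in))
            return -EINVAL;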

Cc: stable@vger.kernel.org
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/f44bc2acd6777d932de3d71a5692235b5b2b7397.1629451684.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: add clarifying comment for io_cqring_ev_posted()
Jens Axboe [Sat, 21 Aug 2021 13:21:19 +0000 (07:21 -0600)]
io_uring: add clarifying comment for io_cqring_ev_posted()

We've previously had an issue where the overflow flush unconditionally
called io_cqring_ev_posted() even if it didn't flush any events to the
ring, causing spurious wakeups and eventfd increments when no new events
were available. Some applications don't like that; see commit
b18032bb0a88 for details.

This came up in discussion for another patch recently, hence add a
comment detailing the relationship between calling the events-posted
helper and CQ ring entries.

Link: https://lore.kernel.org/io-uring/77a44fce-c831-16a6-8e80-9aee77f496a2@kernel.dk/
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: place fixed tables under memcg limits
Pavel Begunkov [Fri, 20 Aug 2021 09:36:36 +0000 (10:36 +0100)]
io_uring: place fixed tables under memcg limits

Fixed tables may be quite large; place all of them, together with the
allocated tags, under memcg limits.
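
In practice this means switching the table allocations to accounted GFP
flags; a hedged sketch of the pattern:

    /* when sizing the fixed file/buffer tables and their tags */
    table = kvcalloc(nr_entries, sizeof(*table), GFP_KERNEL_ACCOUNT);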

Cc: stable@vger.kernel.org
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/b3ac9f5da9821bb59837b5fe25e8ef4be982218c.1629451684.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: limit fixed table size by RLIMIT_NOFILE
Pavel Begunkov [Fri, 20 Aug 2021 09:36:35 +0000 (10:36 +0100)]
io_uring: limit fixed table size by RLIMIT_NOFILE

Limit the number of files in io_uring fixed tables by RLIMIT_NOFILE;
that's the first and simplest restriction we should impose.
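
A hedged sketch of the check at registration time:

    /* in io_sqe_files_register(), roughly */
    if (nr_args > rlimit(RLIMIT_NOFILE))
            return -EMFILE;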

Cc: stable@vger.kernel.org
Suggested-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/b2756c340aed7d6c0b302c26dab50c6c5907f4ce.1629451684.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: fix lack of protection for compl_nr
Hao Xu [Fri, 20 Aug 2021 22:19:54 +0000 (06:19 +0800)]
io_uring: fix lack of protection for compl_nr

compl_nr in ctx_flush_and_put() is not protected by uring_lock, which
may cause problems when it is accessed in parallel:

say compl_nr > 0

  ctx_flush_and_put                  other context
   if (compl_nr)                      get mutex
                                      compl_nr > 0
                                      do flush
                                          compl_nr = 0
                                      release mutex
        get mutex
           do flush (*)
        release mutex

At the point marked (*) we call io_cqring_ev_posted() and users likely
get no events there. To avoid such spurious events, re-check the value
while holding the lock.
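
The fix keeps the unlocked fast path but re-validates once the mutex is
held; a hedged sketch:

    if (ctx->submit_state.compl_nr) {               /* unlocked fast path */
            mutex_lock(&ctx->uring_lock);
            if (ctx->submit_state.compl_nr)         /* re-check under the lock */
                    io_submit_flush_completions(ctx);
            mutex_unlock(&ctx->uring_lock);
    }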

Fixes: 2c32395d8111 ("io_uring: fix __tctx_task_work() ctx race")
Signed-off-by: Hao Xu <haoxu@linux.alibaba.com>
Link: https://lore.kernel.org/r/20210820221954.61815-1-haoxu@linux.alibaba.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: Add register support for non-4k PAGE_SIZE
wangyangbo [Thu, 19 Aug 2021 05:56:57 +0000 (13:56 +0800)]
io_uring: Add register support for non-4k PAGE_SIZE

The allocated rsrc table currently uses PAGE_SIZE as the size of the
2nd-level table, and accessing it relies on each level's index being
derived from the fixed TABLE_SHIFT (12 - 3), which is only correct for
4k pages. To work correctly with non-4k pages, define TABLE_SHIFT as
non-fixed (PAGE_SHIFT minus the shift of the data size), giving the
number of 2nd-level table entries.
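
A hedged sketch of the resulting definition (macro names approximate):

    /* number of u64 tag entries per 2nd-level page, for any page size */
    #define IO_RSRC_TAG_TABLE_SHIFT (PAGE_SHIFT - 3)
    #define IO_RSRC_TAG_TABLE_MAX   (1U << IO_RSRC_TAG_TABLE_SHIFT)
    #define IO_RSRC_TAG_TABLE_MASK  (IO_RSRC_TAG_TABLE_MAX - 1)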

Signed-off-by: wangyangbo <wangyangbo@uniontech.com>
Link: https://lore.kernel.org/r/20210819055657.27327-1-wangyangbo@uniontech.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>