Christoph Hellwig [Tue, 31 Aug 2021 07:32:21 +0000 (09:32 +0200)]
md: extend disks_mutex coverage
disks_mutex is intended to serialize md_alloc. Extend it to also cover
the kobject_uevent call and the lookup of the sysfs dirent, to help
reduce error handling complexity.
Christoph Hellwig [Mon, 30 Aug 2021 10:21:23 +0000 (12:21 +0200)]
md: add the bitmap group to the default groups for the md kobject
Replace the deprecated default_attrs with the default_groups mechanism,
and add the always-visible bitmap group to the groups created at
kobject_add time.
Christoph Hellwig [Mon, 30 Aug 2021 10:27:50 +0000 (12:27 +0200)]
md: fix a lock order reversal
Commit b0140891a8cea3 ("md: Fix race when creating a new md device.")
not only moved assigning mddev->gendisk before calling add_disk, which
fixes the races described in the commit log, but also added a
mddev->open_mutex critical section over add_disk and creation of the
md kobj. Adding a kobject after add_disk is racy versus deleting the gendisk
right after adding it, but md already protects against that by holding
a mddev->active reference.
On the other hand, taking this lock added a lock order reversal with what
is now disk->open_mutex (bdev->bd_mutex when the commit was added) for
partition devices, which need that lock for the internal open during the
partition scan, and a recent commit also takes it for non-partitioned
devices, leading to further lockdep splatter.
Fixes: b0140891a8ce ("md: Fix race when creating a new md device.")
Fixes: d62633873590 ("block: support delayed holder registration")
Reported-by: syzbot+fadc0aaf497e6a493b9f@syzkaller.appspotmail.com
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: syzbot+fadc0aaf497e6a493b9f@syzkaller.appspotmail.com
Jens Axboe [Tue, 31 Aug 2021 01:37:41 +0000 (19:37 -0600)]
io_uring: IORING_OP_WRITE needs hash_reg_file set
During some testing, it became evident that using IORING_OP_WRITE doesn't
hash buffered writes like the other write commands do. That's simply
an oversight, and it can cause performance regressions when doing buffered
writes with this command.
Correct that and add the flag, so that buffered writes are correctly
hashed when using the non-iovec based write command.
Jens Axboe [Mon, 30 Aug 2021 20:17:12 +0000 (14:17 -0600)]
io-wq: split bounded and unbounded work into separate lists
We've got a few issues that all boil down to the fact that we have one
list of pending work items, yet two different types of workers to
serve them. This causes some oddities around workers switching type and
even hashed work vs regular work on the same bounded list.
Just separate them out cleanly, similarly to how we already do
accounting of what is running. That provides a clean separation and
removes some corner cases that can cause stalls when handling IO
that is punted to io-wq.
Fixes: ecc53c48c13d ("io-wq: check max_worker limits if a worker transitions bound state")
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Mon, 30 Aug 2021 17:55:22 +0000 (11:55 -0600)]
io-wq: fix race between adding work and activating a free worker
The attempt to find and activate a free worker for new work is currently
combined with creating a new one if we don't find one, but that opens
io-wq up to a race where the worker that is found and activated can
put itself to sleep without knowing that it has been selected to perform
this new work.
Fix this by moving the activation to where we add the new work item;
then we can retain it within the wqe->lock scope and eliminate the race
with the worker itself checking inside the lock, but sleeping outside of
it.
Cc: stable@vger.kernel.org
Reported-by: Andres Freund <andres@anarazel.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Mon, 30 Aug 2021 13:45:47 +0000 (07:45 -0600)]
io-wq: fix wakeup race when adding new work
When new work is added, io_wqe_enqueue() checks if we need to wake or
create a new worker. But that check is done outside the lock that
otherwise synchronizes us with a worker going to sleep, so we can end
up in the following situation:
CPU0                                    CPU1
lock
insert work
unlock
atomic_read(nr_running) != 0
                                        lock
                                        atomic_dec(nr_running)
                                        no wakeup needed
Hold the wqe lock around the "need to wakeup" check. Then we can also get
rid of the temporary work_flags variable, as we know the work will remain
valid as long as we hold the lock.
Cc: stable@vger.kernel.org
Reported-by: Andres Freund <andres@anarazel.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Mon, 30 Aug 2021 12:33:08 +0000 (06:33 -0600)]
io-wq: wqe and worker locks no longer need to be IRQ safe
io_uring no longer queues async work off completion handlers that run in
hard or soft interrupt context, and that use case was the only reason that
io-wq had to use IRQ safe locks for wqe and worker locks.
Jens Axboe [Sun, 29 Aug 2021 22:13:03 +0000 (16:13 -0600)]
io-wq: check max_worker limits if a worker transitions bound state
For the two places where new workers are created, we diligently check if
we are allowed to create a new worker. If we're currently at the limit
of how many workers of a given type we can have, then we don't create
any new ones.
If you have a mixed workload with various types of bounded and unbounded
work, then it can happen that a worker finishes one type of work and
is then transitioned to the other type. For this case, we don't check
if we are actually allowed to do so. This can cause io-wq to temporarily
exceed the allowed number of workers for a given type.
When retrieving work, check that the types match. If they don't, check
if we are allowed to transition to the other type. If not, then don't
handle the new work.
Cc: stable@vger.kernel.org
Reported-by: Johannes Lundberg <johalun0@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Sun, 29 Aug 2021 22:12:49 +0000 (16:12 -0600)]
Merge branch 'for-5.15/drivers' into for-next
* for-5.15/drivers:
Revert "floppy: reintroduce O_NDELAY fix"
raid1: ensure write behind bio has less than BIO_MAX_VECS sectors
md/raid10: Remove unnecessary rcu_dereference in raid10_handle_discard
Jens Axboe [Sun, 29 Aug 2021 22:12:47 +0000 (16:12 -0600)]
Merge branch 'for-5.15/io_uring' into for-next
* for-5.15/io_uring:
io_uring: allow updating linked timeouts
io_uring: keep ltimeouts in a list
io_uring: support CLOCK_BOOTTIME/REALTIME for timeouts
io-wq: provide a way to limit max number of workers
Jens Axboe [Fri, 27 Aug 2021 23:11:06 +0000 (17:11 -0600)]
io_uring: support CLOCK_BOOTTIME/REALTIME for timeouts
Certain use cases want to use CLOCK_BOOTTIME or CLOCK_REALTIME instead of
the default CLOCK_MONOTONIC.
Add IORING_TIMEOUT_BOOTTIME and IORING_TIMEOUT_REALTIME flags that
allow timeouts and linked timeouts to use the selected clock source.
Only one clock source may be selected, and we -EINVAL the request if more
than one is given. If neither BOOTTIME nor REALTIME is selected, the
previous default of MONOTONIC is used.
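As a hedged illustration (not part of the patch), arming a BOOTTIME-based
timeout from userspace with liburing might look like this, assuming headers
that already define the new flag:
  #include <errno.h>
  #include <liburing.h>

  /* Arm a pure 5s timeout against CLOCK_BOOTTIME instead of the
   * default CLOCK_MONOTONIC. Sketch only. */
  static int arm_boottime_timeout(struct io_uring *ring)
  {
          struct __kernel_timespec ts = { .tv_sec = 5, .tv_nsec = 0 };
          struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

          if (!sqe)
                  return -EAGAIN;
          /* count == 0: fire on expiry rather than after N completions */
          io_uring_prep_timeout(sqe, &ts, 0, IORING_TIMEOUT_BOOTTIME);
          return io_uring_submit(ring);
  }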
Jens Axboe [Fri, 27 Aug 2021 17:33:19 +0000 (11:33 -0600)]
io-wq: provide a way to limit max number of workers
io-wq divides work into two categories:
1) Work that completes in a bounded time, like reading from a regular file
or a block device. This type of work is limited based on the size of
the SQ ring.
2) Work that may never complete, which we call unbounded work. The number
   of workers here is limited only by RLIMIT_NPROC.
For various use cases, it's handy to have the kernel limit the maximum
number of pending workers for both categories. Provide a way to do that
with a new IORING_REGISTER_IOWQ_MAX_WORKERS operation.
IORING_REGISTER_IOWQ_MAX_WORKERS takes an array of two integers and sets
the max worker count to what is being passed in for each category. The
old values are returned into that same array. If 0 is being passed in for
either category, it simply returns the current value.
The value is capped at RLIMIT_NPROC. This actually isn't that important,
as it's more of a hint: if we exceed the value, our attempt to fork a new
worker will simply fail. This already happens naturally if more than one
node is in the system, as these values are per-node internally in io-wq.
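A minimal userspace sketch, assuming a liburing version that wraps the new
operation as io_uring_register_iowq_max_workers():
  #include <liburing.h>
  #include <stdio.h>

  /* Cap io-wq at 8 bounded and 4 unbounded workers; the old limits
   * are written back into the array. Hedged sketch, not the patch. */
  static int limit_iowq_workers(struct io_uring *ring)
  {
          unsigned int vals[2] = { 8, 4 }; /* [0] bounded, [1] unbounded */
          int ret = io_uring_register_iowq_max_workers(ring, vals);

          if (ret < 0)
                  return ret;
          printf("old limits: bounded=%u unbounded=%u\n", vals[0], vals[1]);
          return 0;
  }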
Jens Axboe [Sun, 29 Aug 2021 12:50:58 +0000 (06:50 -0600)]
Merge tag 'floppy-for-5.15' of https://github.com/evdenis/linux-floppy into for-5.15/drivers
Pull floppy fix from Denis:
"Bring back O_NDELAY for floppy
Only one commit this time, reverting the O_NDELAY removal for the floppy.
Users reported that the commit breaks userspace utils and known floppy
workflow patterns. We already reverted the same commit back in 2016,
presumably for the same reason. Completely dropping O_NDELAY for the floppy
seems excessive as a way to solve the problems it introduces.
I have started writing basic selftests for the floppy to prevent this kind
of userspace breakage in the future.
Signed-off-by: Denis Efremov <efremov@linux.com>"
* tag 'floppy-for-5.15' of https://github.com/evdenis/linux-floppy:
Revert "floppy: reintroduce O_NDELAY fix"
Denis Efremov [Sat, 7 Aug 2021 07:37:02 +0000 (10:37 +0300)]
Revert "floppy: reintroduce O_NDELAY fix"
The patch breaks userspace implementations (e.g. fdutils) and introduces
regressions in behaviour. Previously, it was possible to O_NDELAY open a
floppy device with no media inserted or with write protected media without
an error. Some userspace tools use this particular behavior for probing.
This is not the first time we have reverted this patch; the previous revert
is commit f2791e7eadf4 (Revert "floppy: refactor open() flags handling").
Jens Axboe [Fri, 27 Aug 2021 22:32:01 +0000 (16:32 -0600)]
Merge branch 'md-next' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md into for-5.15/drivers
Pull MD changes from Song.
* 'md-next' of https://git.kernel.org/pub/scm/linux/kernel/git/song/md:
raid1: ensure write behind bio has less than BIO_MAX_VECS sectors
md/raid10: Remove unnecessary rcu_dereference in raid10_handle_discard
Guoqing Jiang [Tue, 24 Aug 2021 01:16:54 +0000 (09:16 +0800)]
raid1: ensure write behind bio has less than BIO_MAX_VECS sectors
We can't split a write-behind bio that has more than BIO_MAX_VECS sectors;
otherwise the call trace below is triggered, because an oversized
write-behind bio could be allocated later.
Pavel Begunkov [Wed, 25 Aug 2021 19:51:40 +0000 (20:51 +0100)]
io_uring: add build check for buf_index overflows
req->buf_index is a u16, so we rely on registered buffer indexes
fitting into it. Add a build check so that, when the upper limit for the
number of buffers is lifted, we get a compilation failure rather than
lurking problems.
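The guard meant here is of the BUILD_BUG_ON() kind; roughly (illustrative
constant and helper name, not the patch itself):
  #include <linux/build_bug.h>

  /* Hypothetical stand-in for the real registered-buffer limit. */
  #define MAX_REG_BUFFERS_SKETCH (1U << 14)

  /* Build fails if the limit ever stops fitting in the u16 buf_index. */
  static inline void io_buf_index_checks(void)
  {
          BUILD_BUG_ON(MAX_REG_BUFFERS_SKETCH >= (1U << 16));
  }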
Pavel Begunkov [Fri, 27 Aug 2021 10:55:01 +0000 (11:55 +0100)]
io_uring: add task-refs-get helper
As we now have more complicated task referencing, which apart from normal
task references includes taking tctx->inflight and caching all that, it's
a good idea to isolate all of it in helpers.
Hao Xu [Fri, 27 Aug 2021 09:46:09 +0000 (17:46 +0800)]
io_uring: fix failed linkchain code logic
Given a linkchain like this:
req0(link_flag)-->req1(link_flag)-->...-->reqn(no link_flag)
There is a problem:
- if some intermediate linked req like req1's submission fails, the reqs
  after it won't be cancelled.
  - sqpoll disabled: maybe it's ok, since users can get the error info
    of req1 and stop submitting the following sqes.
  - sqpoll enabled: definitely a problem, as the following sqes will be
    submitted in the next round.
The solution is to refactor the code logic to:
- if a linked req's submission fails, just mark it and the head (if it
  exists) as REQ_F_FAIL. Leverage req->result to indicate whether it
  failed or was cancelled.
- submit or fail the whole chain when we come to the end of it.
Xiao Ni [Wed, 18 Aug 2021 05:57:48 +0000 (13:57 +0800)]
md/raid10: Remove unnecessary rcu_dereference in raid10_handle_discard
We are seeing the following warning in raid10_handle_discard.
[ 695.110751] =============================
[ 695.131439] WARNING: suspicious RCU usage
[ 695.151389] 4.18.0-319.el8.x86_64+debug #1 Not tainted
[ 695.174413] -----------------------------
[ 695.192603] drivers/md/raid10.c:1776 suspicious rcu_dereference_check() usage!
[ 695.225107] other info that might help us debug this:
[ 695.260940] rcu_scheduler_active = 2, debug_locks = 1
[ 695.290157] no locks held by mkfs.xfs/10186.
The first loop of raid10_handle_discard already determines which
disks need to handle the discard request, and it takes a reference on
each rdev via rdev->nr_pending. So conf->mirrors will not change until
all bios come back from the underlying disks, and there is no need to
use rcu_dereference to get the rdev.
Cc: stable@vger.kernel.org
Fixes: d30588b2731f ("md/raid10: improve raid10 discard request")
Signed-off-by: Xiao Ni <xni@redhat.com>
Acked-by: Guoqing Jiang <guoqing.jiang@linux.dev>
Signed-off-by: Song Liu <songliubraving@fb.com>
Jens Axboe [Wed, 25 Aug 2021 20:21:16 +0000 (14:21 -0600)]
Merge branch 'for-5.15/drivers' into for-next
* for-5.15/drivers:
nbd: remove nbd->destroy_complete
nbd: only return usable devices from nbd_find_unused
nbd: set nbd->index before releasing nbd_index_mutex
nbd: prevent IDR lookups from finding partially initialized devices
nbd: reset NBD to NULL when restarting in nbd_genl_connect
nbd: add missing locking to the nbd_dev_add error path
Christoph Hellwig [Wed, 25 Aug 2021 16:31:08 +0000 (18:31 +0200)]
nbd: remove nbd->destroy_complete
The nbd->destroy_complete pointer is not really needed. When creating
a device without a specific index we now simply skip devices marked
NBD_DESTROY_ON_DISCONNECT, as there is not much point in reusing them.
For device creation with a specific index there is no real need to
treat the case of a requested but not finished disconnect differently
from any other device that is being shut down, i.e. we can just return
an error, as a slightly different race window would produce one anyway.
Fixes: 6e4df4c64881 ("nbd: reduce the nbd_index_mutex scope")
Reported-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Reported-by: syzbot+2c98885bcd769f56b6d6@syzkaller.appspotmail.com
Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20210825163108.50713-7-hch@lst.de
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Wed, 25 Aug 2021 16:31:07 +0000 (18:31 +0200)]
nbd: only return usable devices from nbd_find_unused
Devices marked NBD_DESTROY_ON_DISCONNECT can and should be skipped,
given that they won't survive the disconnect. So skip them, try to
grab a reference directly, and just continue if the device is being
torn down or created and thus has a zero refcount.
Tetsuo Handa [Wed, 25 Aug 2021 16:31:05 +0000 (18:31 +0200)]
nbd: prevent IDR lookups from finding partially initialized devices
Previously, nbd_index_mutex was held across the whole add/remove/lookup
operations in order to guarantee that partially initialized devices are
not reachable via idr_find() or idr_for_each(). But now that partially
initialized devices become reachable as soon as idr_alloc() succeeds,
we need to skip partially initialized devices. Since it seems that
all functions use refcount_inc_not_zero(&nbd->refs) in order to skip
destroying devices, update nbd->refs from zero to non-zero as the last
step of device initialization in order to also skip partially initialized
devices.
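The resulting lookup pattern is roughly the following sketch (hypothetical
names, condensed from the description above):
  #include <linux/idr.h>
  #include <linux/refcount.h>

  struct nbd_dev_sketch {
          refcount_t refs; /* stays 0 until initialization completes */
  };

  /* idr_find() may return a partially initialized device, but
   * refcount_inc_not_zero() fails for it, so the lookup skips it. */
  static struct nbd_dev_sketch *nbd_lookup_sketch(struct idr *idr, int index)
  {
          struct nbd_dev_sketch *nbd = idr_find(idr, index);

          if (nbd && refcount_inc_not_zero(&nbd->refs))
                  return nbd;
          return NULL;
  }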
Fixes: 6e4df4c64881 ("nbd: reduce the nbd_index_mutex scope")
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
[hch: split from a larger patch, added comments]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20210825163108.50713-4-hch@lst.de
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Wed, 25 Aug 2021 16:31:04 +0000 (18:31 +0200)]
nbd: reset NBD to NULL when restarting in nbd_genl_connect
When nbd_genl_connect restarts to wait for a disconnecting device, nbd
needs to be reset to NULL. Do that by factoring out a helper to find
an unused device.
Fixes: 6177b56c96ff ("nbd: refactor device search and allocation in nbd_genl_connect")
Reported-by: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
Reported-by: Hillf Danton <hdanton@sina.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20210825163108.50713-3-hch@lst.de
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Wed, 25 Aug 2021 12:47:08 +0000 (06:47 -0600)]
Merge branch 'for-5.15/block' into for-next
* for-5.15/block:
sg: pass the device name to blk_trace_setup
block, bfq: cleanup the repeated declaration
blk-crypto: fix check for too-large dun_bytes
Christoph Hellwig [Wed, 25 Aug 2021 07:54:38 +0000 (09:54 +0200)]
sg: pass the device name to blk_trace_setup
Fix a regression that passed a NULL device name to blk_trace_setup
accidentally.
Fixes: aebbb5831fbd ("sg: do not allocate a gendisk")
Reported-by: syzbot+f74aa89114a236643919@syzkaller.appspotmail.com
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
Link: https://lore.kernel.org/r/20210825075438.1883687-1-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Wed, 25 Aug 2021 12:37:32 +0000 (06:37 -0600)]
Merge branch 'io_uring-bio-cache.5' into for-next
* io_uring-bio-cache.5:
bio: improve kerneldoc documentation for bio_alloc_kiocb()
block: provide bio_clear_hipri() helper
block: use the percpu bio cache in __blkdev_direct_IO
io_uring: enable use of bio alloc cache
block: clear BIO_PERCPU_CACHE flag if polling isn't supported
bio: add allocation cache abstraction
fs: add kiocb alloc cache flag
bio: optimize initialization of a bio
Jens Axboe [Wed, 25 Aug 2021 12:37:29 +0000 (06:37 -0600)]
Merge branch 'for-5.15/io_uring-vfs' into for-next
* for-5.15/io_uring-vfs:
io_uring: add support for IORING_OP_LINKAT
io_uring: add support for IORING_OP_SYMLINKAT
io_uring: add support for IORING_OP_MKDIRAT
namei: update do_*() helpers to return ints
namei: make do_linkat() take struct filename
namei: add getname_uflags()
namei: make do_symlinkat() take struct filename
namei: make do_mknodat() take struct filename
namei: make do_mkdirat() take struct filename
namei: change filename_parentat() calling conventions
namei: ignore ERR/NULL names in putname()
Pavel Begunkov [Wed, 25 Aug 2021 11:25:47 +0000 (12:25 +0100)]
io_uring: accept directly into fixed file table
As done with open opcodes, allow accept to skip installing fd into
processes' file tables and put it directly into io_uring's fixed file
table. Same restrictions and design as for open.
Pavel Begunkov [Wed, 25 Aug 2021 11:25:45 +0000 (12:25 +0100)]
io_uring: openat directly into fixed fd table
Instead of opening a file into a process's file table as usual and then
registering the fd within io_uring, some users may want to skip the
first step and place it directly into io_uring's fixed file table.
This patch adds such a capability for IORING_OP_OPENAT and
IORING_OP_OPENAT2.
The behaviour is controlled by setting sqe->file_index, where 0 implies
the old behaviour of using normal file tables. If a non-zero value is
specified, the file is placed into the fixed file table at slot
sqe->file_index - 1. A fixed file table must already have been created,
and the slot must be valid and empty, otherwise the operation will fail.
Keep the error codes consistent with IORING_OP_FILES_UPDATE: ENXIO and
EINVAL on inappropriate fixed tables, and EBADF on collision with an
already registered file.
Note: IOSQE_FIXED_FILE can't be used to switch between modes, because
accept takes a file, and it already uses the flag with a different
meaning.
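A hedged userspace sketch of the convention, assuming liburing, headers
exposing the new file_index field, and a previously registered fixed file
table:
  #include <errno.h>
  #include <fcntl.h>
  #include <liburing.h>

  /* Open 'path' directly into fixed-file slot 'slot': sqe->file_index
   * is slot + 1, with 0 meaning the old install-an-fd behaviour. */
  static int openat_into_fixed_slot(struct io_uring *ring, const char *path,
                                    unsigned int slot)
  {
          struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

          if (!sqe)
                  return -EAGAIN;
          io_uring_prep_openat(sqe, AT_FDCWD, path, O_RDONLY, 0);
          sqe->file_index = slot + 1;
          return io_uring_submit(ring);
  }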
Pavel Begunkov [Wed, 25 Aug 2021 11:25:44 +0000 (12:25 +0100)]
net: add accept helper not installing fd
Introduce and reuse a helper that acts similarly to __sys_accept4_file()
but returns a struct file instead of installing a file descriptor. It will
be used by io_uring.
Niklas Cassel [Wed, 11 Aug 2021 11:05:19 +0000 (11:05 +0000)]
blk-zoned: allow BLKREPORTZONE without CAP_SYS_ADMIN
A user space process should not need the CAP_SYS_ADMIN capability set
in order to perform a BLKREPORTZONE ioctl.
Getting the zone report is required in order to get the write pointer.
Neither read() nor write() requires CAP_SYS_ADMIN, so it is reasonable
that a user space process that can read/write from/to the device, also
can get the write pointer. (Since e.g. writes have to be at the write
pointer.)
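For illustration, fetching a zone's write pointer now needs nothing more
than read access to the device; a hedged sketch:
  #include <fcntl.h>
  #include <linux/blkzoned.h>
  #include <stdlib.h>
  #include <sys/ioctl.h>
  #include <unistd.h>

  /* Report the zone starting at 'sector' and return its write pointer,
   * or -1 on error. Sketch only. */
  static long long zone_write_pointer(const char *dev, __u64 sector)
  {
          struct blk_zone_report *rep;
          long long wp = -1;
          int fd = open(dev, O_RDONLY); /* no CAP_SYS_ADMIN required */

          if (fd < 0)
                  return -1;
          rep = calloc(1, sizeof(*rep) + sizeof(struct blk_zone));
          if (rep) {
                  rep->sector = sector;
                  rep->nr_zones = 1;
                  if (!ioctl(fd, BLKREPORTZONE, rep) && rep->nr_zones)
                          wp = (long long)rep->zones[0].wp;
                  free(rep);
          }
          close(fd);
          return wp;
  }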
Fixes: 3ed05a987e0f ("blk-zoned: implement ioctls")
Signed-off-by: Niklas Cassel <niklas.cassel@wdc.com>
Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
Reviewed-by: Aravind Ramesh <aravind.ramesh@wdc.com>
Reviewed-by: Adam Manzanares <a.manzanares@samsung.com>
Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Cc: stable@vger.kernel.org # v4.10+
Link: https://lore.kernel.org/r/20210811110505.29649-3-Niklas.Cassel@wdc.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Niklas Cassel [Wed, 11 Aug 2021 11:05:18 +0000 (11:05 +0000)]
blk-zoned: allow zone management send operations without CAP_SYS_ADMIN
Zone management send operations (BLKRESETZONE, BLKOPENZONE, BLKCLOSEZONE
and BLKFINISHZONE) should be allowed under the same permissions as write().
(write() does not require CAP_SYS_ADMIN).
Additionally, other ioctls like BLKSECDISCARD and BLKZEROOUT only check if
the fd was successfully opened with FMODE_WRITE.
(They do not require CAP_SYS_ADMIN).
Currently, zone management send operations require both CAP_SYS_ADMIN
and that the fd was successfully opened with FMODE_WRITE.
Remove the CAP_SYS_ADMIN requirement, so that zone management send
operations match the access control requirement of write(), BLKSECDISCARD
and BLKZEROOUT.
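For illustration, a hedged sketch of a zone reset under the new rules (an
fd opened for writing is enough):
  #include <fcntl.h>
  #include <linux/blkzoned.h>
  #include <sys/ioctl.h>
  #include <unistd.h>

  /* Reset one zone; write access on the fd is now the only requirement,
   * matching write(), BLKSECDISCARD and BLKZEROOUT. Sketch only. */
  static int reset_zone(const char *dev, __u64 sector, __u64 nr_sectors)
  {
          struct blk_zone_range range = {
                  .sector = sector,
                  .nr_sectors = nr_sectors,
          };
          int fd = open(dev, O_WRONLY);
          int ret;

          if (fd < 0)
                  return -1;
          ret = ioctl(fd, BLKRESETZONE, &range);
          close(fd);
          return ret;
  }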
Fixes: 3ed05a987e0f ("blk-zoned: implement ioctls")
Signed-off-by: Niklas Cassel <niklas.cassel@wdc.com>
Reviewed-by: Damien Le Moal <damien.lemoal@wdc.com>
Reviewed-by: Aravind Ramesh <aravind.ramesh@wdc.com>
Reviewed-by: Adam Manzanares <a.manzanares@samsung.com>
Reviewed-by: Himanshu Madhani <himanshu.madhani@oracle.com>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Cc: stable@vger.kernel.org # v4.10+
Link: https://lore.kernel.org/r/20210811110505.29649-2-Niklas.Cassel@wdc.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Christoph Hellwig [Tue, 24 Aug 2021 14:43:10 +0000 (16:43 +0200)]
block: refine the disk_live check in del_gendisk
Hidden gendisks will never be marked live.
Fixes: 40b3a52ffc5b ("block: add a sanity check for a live disk in del_gendisk")
Reported-by: Bruno Goncalves <bgoncalv@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20210824144310.1487816-1-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Dmitry Osipenko [Fri, 20 Aug 2021 00:45:35 +0000 (03:45 +0300)]
mmc: block: Support alternative_gpt_sector() operation
Support the generic alternative_gpt_sector() block device operation.
It calculates the location of the GPT entry for the eMMC of NVIDIA Tegra
Android devices. Add a new MMC_CAP2_ALT_GPT_TEGRA flag that enables
scanning of the alternative GPT sector, and add a raw_boot_mult field to
mmc_ext_csd which allows getting the size of the boot partitions that is
needed for the calculation.
Dmitry Osipenko [Fri, 20 Aug 2021 00:45:33 +0000 (03:45 +0300)]
block: Add alternative_gpt_sector() operation
Add an alternative_gpt_sector() block device operation which specifies
an alternative location of the GPT entry. This allows us to support
Android devices that have the GPT entry at a non-standard location and
can't be repartitioned easily.
Pavel Begunkov [Mon, 19 Jul 2021 10:53:00 +0000 (11:53 +0100)]
bio: fix page leak bio_add_hw_page failure
__bio_iov_append_get_pages() doesn't put pages that were not appended on
bio_add_hw_page() failure, potentially leaking them; fix it. Also do
the same for __bio_iov_iter_get_pages(), even though it looks like it
can't be triggered by userspace in this case.
IORING_OP_LINKAT behaves like linkat(2) and takes the same flags and
arguments.
In some internal places 'hardlink' is used instead of 'link' to avoid
confusion with the SQE links. Name 'link' conflicts with the existing
'link' member of io_kiocb.
Jens Axboe [Mon, 23 Aug 2021 19:47:12 +0000 (13:47 -0600)]
Merge branch 'for-5.15/drivers' into for-next
* for-5.15/drivers: (33 commits)
nvme: remove the unused NVME_NS_* enum
nvme: remove nvm_ndev from ns
nvme: Have NVME_FABRICS select NVME_CORE instead of transport drivers
block: nbd: add sanity check for first_minor
nvmet: check that host sqsize does not exceed ctrl MQES
nvmet: avoid duplicate qid in connect cmd
nvmet: pass back cntlid on successful completion
nvme-rdma: don't update queue count when failing to set io queues
nvme-tcp: don't update queue count when failing to set io queues
nvme-tcp: pair send_mutex init with destroy
nvme: allow user toggling hmb usage
nvme-pci: disable hmb on idle suspend
nvmet: remove redundant assignments of variable status
nvmet: add set feature tracing support
nvme: add set feature tracing support
nvme-fabrics: remove superfluous nvmf_host_put in nvmf_parse_options
nvme-pci: cmb sysfs: one file, one value
nvme-pci: use attribute group for cmb sysfs
nvme: code command_id with a genctr for use-after-free validation
nvme-tcp: don't check blk_mq_tag_to_rq when receiving pdu data
...
Jens Axboe [Mon, 23 Aug 2021 19:46:55 +0000 (13:46 -0600)]
Merge branch 'for-5.15/block' into for-next
* for-5.15/block: (115 commits)
null_blk: add error handling support for add_disk()
virtio_blk: add error handling support for add_disk()
block: add error handling for device_add_disk / add_disk
block: return errors from disk_alloc_events
block: return errors from blk_integrity_add
block: call blk_register_queue earlier in device_add_disk
block: call blk_integrity_add earlier in device_add_disk
block: create the bdi link earlier in device_add_disk
block: call bdev_add later in device_add_disk
block: fold register_disk into device_add_disk
block: add a sanity check for a live disk in del_gendisk
block: add an explicit ->disk backpointer to the request_queue
block: hold a request_queue reference for the lifetime of struct gendisk
block: pass a request_queue to __blk_alloc_disk
block: remove the minors argument to __alloc_disk_node
block: remove alloc_disk and alloc_disk_node
block: cleanup the lockdep handling in *alloc_disk
sg: do not allocate a gendisk
st: do not allocate a gendisk
nvme: use blk_mq_alloc_disk
...
Jens Axboe [Fri, 13 Aug 2021 13:53:09 +0000 (07:53 -0600)]
bio: improve kerneldoc documentation for bio_alloc_kiocb()
We're missing a description for the 'nr_vecs' parameter. While in there,
clarify that freeing a bio allocated through this function must be done
from process context.
Jens Axboe [Thu, 12 Aug 2021 17:42:53 +0000 (11:42 -0600)]
block: provide bio_clear_hipri() helper
Any case that turns off REQ_HIPRI must also clear BIO_PERCPU_CACHE,
as non-polled IO may complete through hard/soft IRQ and hence isn't
safe for our polled bio alloc cache.
Provide a helper that does just that, and use it in the merging code as
well if we split a bio and turn off polling.
Fixes: be863b9e4348 ("block: clear BIO_PERCPU_CACHE flag if polling isn't supported")
Reported-by: Keith Busch <kbusch@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Wed, 11 Aug 2021 16:19:06 +0000 (10:19 -0600)]
block: clear BIO_PERCPU_CACHE flag if polling isn't supported
The bio alloc cache relies on the fact that a polled bio will complete
in process context; clear the cacheable flag if we disable polling
for a given bio.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Mon, 8 Mar 2021 18:37:47 +0000 (11:37 -0700)]
bio: add allocation cache abstraction
Add a per-cpu bio_set cache for bio allocations, enabling us to quickly
recycle them instead of going through the slab allocator. This cache
isn't IRQ safe, and hence is only really suitable for polled IO.
Very simple: it keeps a count of bios in the cache and maintains a max
of 512 with a slack of 64. If we get above max + slack, we drop a slack's
worth of bios.
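In sketch form, the trimming policy described above (hypothetical names,
not the actual implementation):
  #define BIO_CACHE_MAX_SKETCH   512
  #define BIO_CACHE_SLACK_SKETCH  64

  struct bio_cache_sketch {
          unsigned int nr; /* bios currently held in the cache */
  };

  /* After returning a bio to the cache: once nr exceeds max + slack,
   * free a slack's worth of bios back to the slab allocator. */
  static void bio_cache_trim(struct bio_cache_sketch *cache)
  {
          unsigned int i;

          if (cache->nr <= BIO_CACHE_MAX_SKETCH + BIO_CACHE_SLACK_SKETCH)
                  return;
          for (i = 0; i < BIO_CACHE_SLACK_SKETCH; i++) {
                  /* free_one_cached_bio(cache); -- hypothetical helper */
                  cache->nr--;
          }
  }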
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Tue, 10 Aug 2021 15:29:55 +0000 (09:29 -0600)]
fs: add kiocb alloc cache flag
If this kiocb can safely use the polled bio allocation cache, then this
flag must be set. Generally this can be set for polled IO, where we will
not see IRQ completions of the request.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Jens Axboe [Wed, 11 Aug 2021 15:20:04 +0000 (09:20 -0600)]
bio: optimize initialization of a bio
The memset() used is measurably slower in targeted benchmarks, wasting
about 1% of the total runtime, or 50% of the (later) hot path cached
bio alloc. Get rid of it and fill in the bio manually.
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Pavel Begunkov [Mon, 23 Aug 2021 12:30:44 +0000 (13:30 +0100)]
io_uring: fix io_try_cancel_userdata race for iowq
WARNING: CPU: 1 PID: 5870 at fs/io_uring.c:5975 io_try_cancel_userdata+0x30f/0x540 fs/io_uring.c:5975
CPU: 0 PID: 5870 Comm: iou-wrk-5860 Not tainted 5.14.0-rc6-next-20210820-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
RIP: 0010:io_try_cancel_userdata+0x30f/0x540 fs/io_uring.c:5975
Call Trace:
io_async_cancel fs/io_uring.c:6014 [inline]
io_issue_sqe+0x22d5/0x65a0 fs/io_uring.c:6407
io_wq_submit_work+0x1dc/0x300 fs/io_uring.c:6511
io_worker_handle_work+0xa45/0x1840 fs/io-wq.c:533
io_wqe_worker+0x2cc/0xbb0 fs/io-wq.c:582
ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
io_try_cancel_userdata() can be called from io_async_cancel() executing
in the io-wq context, so the warning fires, which is there to alert
anyone accessing task->io_uring->io_wq in a racy way. However,
io_wq_put_and_exit() always first waits for all threads to complete,
so the only detail left is to zero tctx->io_wq after the context is
removed.
Note: one small assumption is that when IO_WQ_WORK_CANCEL is set, the
executor won't touch ->io_wq, because io_wq_destroy() might cancel
leftover pending requests in such a way.
There are a couple of places where we already open-code the (flags &
AT_EMPTY_PATH) check and io_uring will likely add another one in the
future. Let's just add a simple helper getname_uflags() that handles
this directly and use it.
Pass in the struct filename pointers instead of the user string, for
uniformity with the recently converted do_mknodat(), do_unlinkat(),
do_renameat(), and do_mkdirat().
Pass in the struct filename pointers instead of the user string, for
uniformity with the recently converted do_unlinkat(), do_renameat(),
do_mkdirat().
Pass in the struct filename pointers instead of the user string, and
update the three callers to do the same. This is heavily based on
commit dbea8d345177 ("fs: make do_renameat2() take struct filename").
This behaves like do_unlinkat() and do_renameat2().
Since commit 5c31b6cedb675 ("namei: saner calling conventions for
filename_parentat()") filename_parentat() had the following behavior WRT
the passed in struct filename *:
* On error the name is consumed (putname() is called on it);
* On success the name is returned back as the return value;
Now there is a need for filename_create() and filename_lookup() variants
that do not consume the passed filename, and following the same "consume
the name only on error" semantics has proven to be hard to reason about,
resulting in confusing code.
Hence this preparation change splits filename_parentat() into two: one
that always consumes the name and another that never consumes the name.
This will allow implementing two filename_create() variants in the same
way, and is a consistent and hopefully easier-to-reason-about approach.
Pavel Begunkov [Wed, 18 Aug 2021 11:42:47 +0000 (12:42 +0100)]
io_uring: IRQ rw completion batching
Employ inline completion logic for read/write completions done via
io_req_task_complete(). If ->uring_lock is contended, just do normal
request completion; but if not, make tctx_task_work() grab the lock
and do batched inline completions in io_req_task_complete().
Pavel Begunkov [Wed, 18 Aug 2021 11:42:46 +0000 (12:42 +0100)]
io_uring: batch task work locking
Many task_work handlers either grab ->uring_lock, or may benefit from
having it. Move locking logic out of individual handlers to a lazy
approach controlled by tctx_task_work(), so we don't keep doing
tons of mutex lock/unlock.
Pavel Begunkov [Wed, 18 Aug 2021 11:42:45 +0000 (12:42 +0100)]
io_uring: flush completions for fallbacks
io_fallback_req_func() doesn't expect anyone creating inline
completions, and no one currently does that. Teach the function to flush
completions preparing for further changes.
Pavel Begunkov [Fri, 20 Aug 2021 09:36:37 +0000 (10:36 +0100)]
io_uring: add ->splice_fd_in checks
->splice_fd_in is used only by splice/tee, but no other request checks
it for validity. Add the check for most request types, excluding
reads/writes/sends/recvs; we don't want overhead for those, and they can
be left as-is until the field is actually used.
Jens Axboe [Sat, 21 Aug 2021 13:21:19 +0000 (07:21 -0600)]
io_uring: add clarifying comment for io_cqring_ev_posted()
We've previously had an issue where overflow flush unconditionally calls
io_cqring_ev_posted() even if it didn't flush any events to the ring,
causing wake and eventfd increment where no new events are available.
Some applications don't like that, see commit b18032bb0a88 for details.
This came up in discussion for another patch recently, hence add a
comment detailing what the relationship between calling the events
posted helper and CQ ring entries is.
wangyangbo [Thu, 19 Aug 2021 05:56:57 +0000 (13:56 +0800)]
io_uring: Add register support for non-4k PAGE_SIZE
Currently the allocated rsrc table uses PAGE_SIZE as the size of the
2nd-level table, and accessing this table relies on each level's index
being derived from a fixed TABLE_SHIFT (12 - 3) that assumes 4k pages.
In order to work correctly with non-4k pages, define TABLE_SHIFT as
non-fixed (PAGE_SHIFT - shift of the data) for the 2nd-level table
entry count.
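The shape of the page-size-independent indexing is roughly (hedged sketch
with illustrative names; PAGE_SHIFT comes from the kernel headers):
  /* A 2nd-level table is one page of 8-byte entries, so the index
   * width follows the page size instead of assuming 4k (12 - 3). */
  #define TABLE_SHIFT_SKETCH   (PAGE_SHIFT - 3)
  #define TABLE_ENTRIES_SKETCH (1U << TABLE_SHIFT_SKETCH)

  static inline unsigned int rsrc_l1_index(unsigned int i)
  {
          return i >> TABLE_SHIFT_SKETCH; /* which 2nd-level page */
  }

  static inline unsigned int rsrc_l2_index(unsigned int i)
  {
          return i & (TABLE_ENTRIES_SKETCH - 1); /* slot within the page */
  }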