www.infradead.org Git - users/hch/block.git/log
users/hch/block.git
3 years ago: block: fix error unwinding in device_add_disk (add_disk-fix)
Christoph Hellwig [Tue, 21 Dec 2021 10:03:39 +0000 (11:03 +0100)]
block: fix error unwinding in device_add_disk

Once device_add is called, disk->ev will be freed by disk_release, so we
should not free it twice.  Fix this by allocating disk->ev after device_add
so that the extra local unwinding can be removed entirely.
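
A minimal sketch of the resulting ordering (disk_alloc_events() and disk_to_dev() are used loosely here; this illustrates the idea, not the actual patch):

  int add_disk_sketch(struct gendisk *disk)
  {
          int ret;

          /* steps before device_add() can still unwind locally */
          ret = device_add(disk_to_dev(disk));
          if (ret)
                  return ret;

          /*
           * From here on, disk_release() runs on the final put_device()
           * and frees disk->ev.  Allocating disk->ev only now means the
           * earlier error paths never see it, so no manual free (and
           * hence no double free) is possible.
           */
          disk_alloc_events(disk);
          return 0;
  }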

Based on an earlier patch from Tetsuo Handa.

Reported-by: syzbot <syzbot+28a66a9fbc621c939000@syzkaller.appspotmail.com>
Fixes: 83cbce9574462c6b ("block: add error handling for device_add_disk / add_disk")
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 years ago: Merge branch 'for-5.17/drivers' into for-next
Jens Axboe [Fri, 17 Dec 2021 16:51:05 +0000 (09:51 -0700)]
Merge branch 'for-5.17/drivers' into for-next

* for-5.17/drivers:
  block: remove the rsxx driver
  rsxx: Drop PCI legacy power management
  mtip32xx: convert to generic power management
  mtip32xx: remove pointless drvdata lookups
  mtip32xx: remove pointless drvdata checking
  drbd: Use struct_group() to zero algs
  loop: make autoclear operation asynchronous
  null_blk: cast command status to integer
  pktdvd: stop using bdi congestion framework.

3 years ago: Merge branch 'for-5.17/block' into for-next
Jens Axboe [Fri, 17 Dec 2021 16:51:03 +0000 (09:51 -0700)]
Merge branch 'for-5.17/block' into for-next

* for-5.17/block: (23 commits)
  block: only build the icq tracking code when needed
  block: fold create_task_io_context into ioc_find_get_icq
  block: open code create_task_io_context in set_task_ioprio
  block: fold get_task_io_context into set_task_ioprio
  block: move set_task_ioprio to blk-ioc.c
  block: cleanup ioc_clear_queue
  block: refactor put_io_context
  block: remove the NULL ioc check in put_io_context
  block: refactor put_iocontext_active
  block: simplify struct io_context refcounting
  block: remove the nr_task field from struct io_context
  nvme: add support for mq_ops->queue_rqs()
  nvme: separate command prep and issue
  nvme: split command copy into a helper
  block: add mq_ops->queue_rqs hook
  block: use singly linked list for bio cache
  block: add completion handler for fast path
  block: make queue stat accounting a reference
  bdev: Improve lookup_bdev documentation
  mtd_blkdevs: don't scan partitions for plain mtdblock
  ...

3 years ago: Merge branch 'for-5.17/io_uring' into for-next
Jens Axboe [Fri, 17 Dec 2021 16:51:00 +0000 (09:51 -0700)]
Merge branch 'for-5.17/io_uring' into for-next

* for-5.17/io_uring:
  io_uring: code clean for some ctx usage
  io_uring: batch completion in prior_task_list
  io_uring: split io_req_complete_post() and add a helper
  io_uring: add helper for task work execution code
  io_uring: add a priority tw list for irq completion work
  io-wq: add helper to merge two wq_lists

3 years ago: block: only build the icq tracking code when needed
Christoph Hellwig [Thu, 9 Dec 2021 06:31:31 +0000 (07:31 +0100)]
block: only build the icq tracking code when needed

Only bfq needs the code to track icqs, so make it conditional.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20211209063131.18537-12-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: block: fold create_task_io_context into ioc_find_get_icq
Christoph Hellwig [Thu, 9 Dec 2021 06:31:30 +0000 (07:31 +0100)]
block: fold create_task_io_context into ioc_find_get_icq

Fold create_task_io_context into the only remaining caller.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20211209063131.18537-11-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: block: open code create_task_io_context in set_task_ioprio
Christoph Hellwig [Thu, 9 Dec 2021 06:31:29 +0000 (07:31 +0100)]
block: open code create_task_io_context in set_task_ioprio

The flow in set_task_ioprio can be simplified by simply open coding
create_task_io_context, which removes a refcount roundtrip on the I/O
context.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20211209063131.18537-10-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: block: fold get_task_io_context into set_task_ioprio
Christoph Hellwig [Thu, 9 Dec 2021 06:31:28 +0000 (07:31 +0100)]
block: fold get_task_io_context into set_task_ioprio

Fold get_task_io_context into its only caller, and simplify the code
as no reference to the I/O context is required to just set the ioprio
field.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20211209063131.18537-9-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: block: move set_task_ioprio to blk-ioc.c
Christoph Hellwig [Thu, 9 Dec 2021 06:31:27 +0000 (07:31 +0100)]
block: move set_task_ioprio to blk-ioc.c

Keep set_task_ioprio with the other low-level code that accesses the
io_context structure.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20211209063131.18537-8-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: block: cleanup ioc_clear_queue
Christoph Hellwig [Thu, 9 Dec 2021 06:31:26 +0000 (07:31 +0100)]
block: cleanup ioc_clear_queue

Fold __ioc_clear_queue into ioc_clear_queue and switch to always
use plain _irq locking instead of the more expensive _irqsave that
is not needed here.
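
For reference, the difference between the two locking variants (a generic illustration, not the patched function):

  void with_irqsave(spinlock_t *lock)
  {
          unsigned long flags;

          /* saves and restores the previous IRQ state: safe even if
           * IRQs are already disabled, but costs extra work each time */
          spin_lock_irqsave(lock, flags);
          /* ... critical section ... */
          spin_unlock_irqrestore(lock, flags);
  }

  void with_irq(spinlock_t *lock)
  {
          /* unconditionally disables/enables IRQs: cheaper, but only
           * correct when the caller always runs with IRQs enabled */
          spin_lock_irq(lock);
          /* ... critical section ... */
          spin_unlock_irq(lock);
  }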

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20211209063131.18537-7-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: block: refactor put_io_context
Christoph Hellwig [Thu, 9 Dec 2021 06:31:25 +0000 (07:31 +0100)]
block: refactor put_io_context

Move the code to delay freeing the icqs into a separate helper.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20211209063131.18537-6-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: block: remove the NULL ioc check in put_io_context
Christoph Hellwig [Thu, 9 Dec 2021 06:31:24 +0000 (07:31 +0100)]
block: remove the NULL ioc check in put_io_context

No caller passes in a NULL pointer, so remove the check.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20211209063131.18537-5-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: block: refactor put_iocontext_active
Christoph Hellwig [Thu, 9 Dec 2021 06:31:23 +0000 (07:31 +0100)]
block: refactor put_iocontext_active

Factor out an ioc_exit_icqs helper to tear down the icqs, and fold
the rest of put_iocontext_active into exit_io_context.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20211209063131.18537-4-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: block: simplify struct io_context refcounting
Christoph Hellwig [Thu, 9 Dec 2021 06:31:22 +0000 (07:31 +0100)]
block: simplify struct io_context refcounting

Don't hold a reference to ->refcount for each active reference, but
just one for all active references.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20211209063131.18537-3-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: block: remove the nr_task field from struct io_context
Christoph Hellwig [Thu, 9 Dec 2021 06:31:21 +0000 (07:31 +0100)]
block: remove the nr_task field from struct io_context

Nothing ever looks at ->nr_tasks, so remove it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20211209063131.18537-2-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: block: remove the rsxx driver
Christoph Hellwig [Thu, 16 Dec 2021 08:42:44 +0000 (09:42 +0100)]
block: remove the rsxx driver

This driver was for rare and short-lived high-end enterprise hardware
and hasn't been maintained since 2014, which also means it never got
converted to use blk-mq.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: nvme: add support for mq_ops->queue_rqs()
Jens Axboe [Thu, 18 Nov 2021 15:37:30 +0000 (08:37 -0700)]
nvme: add support for mq_ops->queue_rqs()

This enables the block layer to send us a full plug list of requests
that need submitting. The block layer guarantees that they all belong
to the same queue, but we do have to check the hardware queue mapping
for each request.

If errors are encountered, leave them in the passed in list. Then the
block layer will handle them individually.

This is good for about a 4% improvement in peak performance, taking us
from 9.6M to 10M IOPS/core.
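
The rough shape of such a ->queue_rqs() implementation (demo_prep_rq() and demo_submit_cmd() are placeholders; the real nvme code additionally regroups requests per hardware queue and rings the doorbell once per queue):

  static void demo_queue_rqs(struct request **rqlist)
  {
          struct request *req = *rqlist, *next, *prev = NULL;

          while (req) {
                  next = req->rq_next;

                  if (demo_prep_rq(req) == BLK_STS_OK) {
                          /* unlink the request and issue it */
                          if (prev)
                                  prev->rq_next = next;
                          else
                                  *rqlist = next;
                          req->rq_next = NULL;
                          demo_submit_cmd(req);
                  } else {
                          /* leave failures on the passed-in list; the
                           * block layer re-issues them one by one */
                          prev = req;
                  }
                  req = next;
          }
  }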

Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: nvme: separate command prep and issue
Jens Axboe [Fri, 29 Oct 2021 20:34:11 +0000 (14:34 -0600)]
nvme: separate command prep and issue

Add an nvme_prep_rq() helper to set up a command, and adapt
nvme_queue_rq() to use this helper.

Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: nvme: split command copy into a helper
Jens Axboe [Fri, 29 Oct 2021 20:32:44 +0000 (14:32 -0600)]
nvme: split command copy into a helper

We'll need it for batched submit as well. Since we now have a copy
helper, get rid of the nvme_submit_cmd() wrapper.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Max Gurtovoy <mgurtovoy@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: block: add mq_ops->queue_rqs hook
Jens Axboe [Fri, 3 Dec 2021 13:48:53 +0000 (06:48 -0700)]
block: add mq_ops->queue_rqs hook

If we have a list of requests in our plug list, send it to the driver in
one go, if possible. The driver must set mq_ops->queue_rqs() to support
this; if not, the usual one-by-one path is used.
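
A sketch of the dispatch decision when flushing a plug (simplified; demo_issue_one() is a placeholder, and the real code also requires that no I/O scheduler is attached):

  static void demo_flush_plug(struct blk_plug *plug)
  {
          struct request_queue *q = rq_list_peek(&plug->mq_list)->q;

          if (q->mq_ops->queue_rqs) {
                  /* hand the whole list to the driver in one call */
                  q->mq_ops->queue_rqs(&plug->mq_list);
          } else {
                  /* usual path: issue the requests one by one */
                  while (!rq_list_empty(plug->mq_list))
                          demo_issue_one(rq_list_pop(&plug->mq_list));
          }
  }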

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: block: use singly linked list for bio cache
Jens Axboe [Wed, 1 Dec 2021 23:19:18 +0000 (16:19 -0700)]
block: use singly linked list for bio cache

Pointless to maintain a head/tail for the list, as we never need to
access the tail. Entries are always LIFO for cache hotness reasons.
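
A minimal sketch of such a head-only LIFO cache (illustrative struct; the real cache works on struct bio via its bi_next field):

  struct demo_bio_cache {
          struct bio *free_list;          /* head pointer only, no tail */
          unsigned int nr;
  };

  static void demo_cache_put(struct demo_bio_cache *c, struct bio *bio)
  {
          bio->bi_next = c->free_list;    /* push at the head ... */
          c->free_list = bio;
          c->nr++;
  }

  static struct bio *demo_cache_get(struct demo_bio_cache *c)
  {
          struct bio *bio = c->free_list;

          if (bio) {
                  c->free_list = bio->bi_next;    /* ... pop from the head */
                  c->nr--;
          }
          return bio;     /* most recently freed bio, likely cache-hot */
  }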

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: block: add completion handler for fast path
Jens Axboe [Wed, 1 Dec 2021 22:01:51 +0000 (15:01 -0700)]
block: add completion handler for fast path

The batched completions only deal with non-partial requests anyway, and
they don't deal with any requests that have errors. Add a completion
handler that assumes it's a full request and that it's all being ended
successfully.

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: block: make queue stat accounting a reference
Jens Axboe [Wed, 15 Dec 2021 00:23:05 +0000 (17:23 -0700)]
block: make queue stat accounting a reference

kyber turns on IO statistics when it is loaded on a queue, which means
that even if kyber is later unloaded, we're still stuck with stats
enabled on the queue.

Change the accounting enable from a bool to an int, and pair each enable
call with an equivalent disable call. This ensures that stats get turned
off again appropriately.
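
The pairing works like any other reference count; a sketch, with locking omitted and the 'accounting' field name assumed:

  void demo_stat_enable_accounting(struct request_queue *q)
  {
          if (++q->stats->accounting == 1)        /* first user */
                  blk_queue_flag_set(QUEUE_FLAG_STATS, q);
  }

  void demo_stat_disable_accounting(struct request_queue *q)
  {
          if (--q->stats->accounting == 0)        /* last user gone */
                  blk_queue_flag_clear(QUEUE_FLAG_STATS, q);
  }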

Reviewed-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: rsxx: Drop PCI legacy power management
Bjorn Helgaas [Wed, 8 Dec 2021 19:24:49 +0000 (13:24 -0600)]
rsxx: Drop PCI legacy power management

The rsxx driver doesn't support device suspend, so remove
rsxx_pci_suspend(), the legacy PCI .suspend() method, completely.

Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/r/20211208192449.146076-5-helgaas@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: mtip32xx: convert to generic power management
Vaibhav Gupta [Wed, 8 Dec 2021 19:24:48 +0000 (13:24 -0600)]
mtip32xx: convert to generic power management

Convert mtip32xx from legacy PCI power management to the generic power
management framework.

Previously, mtip32xx used legacy PCI power management, where
mtip_pci_suspend() and mtip_pci_resume() were responsible for both
device-specific things and generic PCI things:

  mtip_pci_suspend
    mtip_block_suspend(dd)              <-- device-specific
    pci_save_state(pdev)                <-- generic PCI
    pci_set_power_state(pdev, pci_choose_state(pdev, state))

  mtip_pci_resume
    pci_set_power_state(PCI_D0)         <-- generic PCI
    pci_restore_state(pdev)             <-- generic PCI
    pcim_enable_device(pdev)            <-- generic PCI
    pci_set_master(pdev)                <-- generic PCI
    mtip_block_resume(dd)               <-- device-specific

With generic power management, the PCI bus PM methods do the generic PCI
things, and the driver needs only the device-specific part, i.e.,

  suspend_devices_and_enter
    dpm_suspend_start(PMSG_SUSPEND)
      pci_pm_suspend                    # PCI bus .suspend() method
        mtip_pci_suspend                # dev->driver->pm->suspend
          mtip_block_suspend            <-- device-specific
    suspend_enter
      dpm_suspend_noirq(PMSG_SUSPEND)
        pci_pm_suspend_noirq            # PCI bus .suspend_noirq() method
          pci_save_state                <-- generic PCI
          pci_prepare_to_sleep          <-- generic PCI
            pci_set_power_state
    ...
    dpm_resume_end(PMSG_RESUME)
      pci_pm_resume                     # PCI bus .resume() method
        pci_restore_standard_config
          pci_set_power_state(PCI_D0)   <-- generic PCI
          pci_restore_state             <-- generic PCI
        mtip_pci_resume                 # dev->driver->pm->resume
          mtip_block_resume             <-- device-specific
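
The driver-side result then reduces to something like the following sketch (names suffixed _sketch are illustrative):

  static int __maybe_unused mtip_suspend_sketch(struct device *dev)
  {
          struct driver_data *dd = dev_get_drvdata(dev);

          return mtip_block_suspend(dd);  /* device-specific part only */
  }

  static int __maybe_unused mtip_resume_sketch(struct device *dev)
  {
          struct driver_data *dd = dev_get_drvdata(dev);

          return mtip_block_resume(dd);   /* device-specific part only */
  }

  static SIMPLE_DEV_PM_OPS(mtip_pm_ops_sketch,
                           mtip_suspend_sketch, mtip_resume_sketch);
  /* wired up via .driver.pm = &mtip_pm_ops_sketch in struct pci_driver */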

[bhelgaas: commit log]

Link: https://lore.kernel.org/r/20210114115423.52414-2-vaibhavgupta40@gmail.com
Signed-off-by: Vaibhav Gupta <vaibhavgupta40@gmail.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/r/20211208192449.146076-4-helgaas@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: mtip32xx: remove pointless drvdata lookups
Bjorn Helgaas [Wed, 8 Dec 2021 19:24:47 +0000 (13:24 -0600)]
mtip32xx: remove pointless drvdata lookups

Previously we passed a struct pci_dev * to mtip_check_surprise_removal(),
which immediately looked up the driver_data.  But all callers already have
the driver_data pointer, so just pass it directly and skip the extra
lookup.  No functional change intended.

Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/r/20211208192449.146076-3-helgaas@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: mtip32xx: remove pointless drvdata checking
Bjorn Helgaas [Wed, 8 Dec 2021 19:24:46 +0000 (13:24 -0600)]
mtip32xx: remove pointless drvdata checking

The .suspend() and .resume() methods are only called after the .probe()
method (mtip_pci_probe()) has set the drvdata and returned success.

Therefore, if we get to mtip_pci_suspend() or mtip_pci_resume(), the
drvdata must be valid.  Drop the unnecessary checking.

Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://lore.kernel.org/r/20211208192449.146076-2-helgaas@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: io_uring: code clean for some ctx usage
Hao Xu [Tue, 14 Dec 2021 05:59:04 +0000 (13:59 +0800)]
io_uring: code clean for some ctx usage

There are some functions that do ctx = req->ctx but then keep using
req->ctx; update those places to use the local variable.

Signed-off-by: Hao Xu <haoxu@linux.alibaba.com>
Link: https://lore.kernel.org/r/20211214055904.61772-1-haoxu@linux.alibaba.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: drbd: Use struct_group() to zero algs
Kees Cook [Thu, 18 Nov 2021 20:37:12 +0000 (12:37 -0800)]
drbd: Use struct_group() to zero algs

In preparation for FORTIFY_SOURCE performing compile-time and run-time
field bounds checking for memset(), avoid intentionally writing across
neighboring fields.

Add a struct_group() for the algs so that memset() can correctly reason
about the size.
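
How struct_group() bounds such a memset, on a made-up structure resembling the drbd one (not the actual net_conf layout):

  struct demo_conf {
          int timeout;
          struct_group(algs,              /* named group of the alg strings */
                  char integrity_alg[64];
                  char verify_alg[64];
                  char csums_alg[64];
          );
          int two_primaries;
  };

  static void demo_clear_algs(struct demo_conf *nc)
  {
          /* one bounded write covering exactly the grouped fields */
          memset(&nc->algs, 0, sizeof(nc->algs));
  }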

Signed-off-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Link: https://lore.kernel.org/r/20211118203712.1288866-1-keescook@chromium.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: loop: make autoclear operation asynchronous
Tetsuo Handa [Mon, 13 Dec 2021 12:55:27 +0000 (21:55 +0900)]
loop: make autoclear operation asynchronous

syzbot is reporting a circular locking problem at __loop_clr_fd() [1]:
commit 87579e9b7d8dc36e ("loop: use worker per cgroup instead of kworker")
calls destroy_workqueue() with disk->open_mutex held.

This circular dependency cannot be broken unless we call __loop_clr_fd()
without holding disk->open_mutex. Therefore, defer __loop_clr_fd() from
lo_release() to a WQ context.
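
The deferral pattern, as a sketch (demo_* names and the exact __loop_clr_fd() signature are illustrative; the real patch also keeps the Lo_rundown state checks):

  static void demo_rundown_workfn(struct work_struct *work)
  {
          struct loop_device *lo = container_of(work, struct loop_device,
                                                rundown_work);

          __loop_clr_fd(lo, true);        /* runs without open_mutex held */
  }

  static void demo_lo_release(struct gendisk *disk, fmode_t mode)
  {
          struct loop_device *lo = disk->private_data;

          /* ... last-close and lo_state checks elided ... */
          INIT_WORK(&lo->rundown_work, demo_rundown_workfn);
          queue_work(system_long_wq, &lo->rundown_work);
  }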

Link: https://syzkaller.appspot.com/bug?extid=643e4ce4b6ad1347d372
Reported-by: syzbot <syzbot+643e4ce4b6ad1347d372@syzkaller.appspotmail.com>
Suggested-by: Christoph Hellwig <hch@infradead.org>
Cc: Jan Kara <jack@suse.cz>
Tested-by: syzbot+643e4ce4b6ad1347d372@syzkaller.appspotmail.com
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/1ed7df28-ebd6-71fb-70e5-1c2972e05ddb@i-love.sakura.ne.jp
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: bdev: Improve lookup_bdev documentation
Matthew Wilcox (Oracle) [Mon, 13 Dec 2021 17:11:13 +0000 (17:11 +0000)]
bdev: Improve lookup_bdev documentation

Add a Context section and rewrite the rest to be clearer.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Nikolay Borisov <nborisov@suse.com>
Link: https://lore.kernel.org/r/20211213171113.3097631-1-willy@infradead.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: mtd_blkdevs: don't scan partitions for plain mtdblock
Christoph Hellwig [Mon, 6 Dec 2021 07:04:09 +0000 (08:04 +0100)]
mtd_blkdevs: don't scan partitions for plain mtdblock

mtdblock / mtdblock_ro set part_bits to 0 and thus never scanned
partitions.  Restore that behavior by setting the GENHD_FL_NO_PART flag.

Fixes: 1ebe2e5f9d68e94c ("block: remove GENHD_FL_EXT_DEVT")
Reported-by: Geert Uytterhoeven <geert+renesas@glider.be>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Link: https://lore.kernel.org/r/20211206070409.2836165-1-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: null_blk: cast command status to integer
Jens Axboe [Fri, 10 Dec 2021 23:32:44 +0000 (16:32 -0700)]
null_blk: cast command status to integer

kernel test robot reports that sparse now triggers a warning on null_blk:

>> drivers/block/null_blk/main.c:1577:55: sparse: sparse: incorrect type in argument 3 (different base types) @@     expected int ioerror @@     got restricted blk_status_t [usertype] error @@
   drivers/block/null_blk/main.c:1577:55: sparse:     expected int ioerror
   drivers/block/null_blk/main.c:1577:55: sparse:     got restricted blk_status_t [usertype] error

because blk_mq_add_to_batch() takes an integer instead of a blk_status_t.
Just cast this to an integer to silence it; null_blk is the odd one out
here since the command status is the "right" type. If we change the
function type, then we'll have to do that for other callers too (existing
and future ones).
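
The silencing cast then looks roughly like this (context reconstructed around the reported line):

  /* blk_status_t is a __bitwise type, so sparse wants an explicit
   * __force cast when it is passed as the plain-int ioerror argument */
  blk_mq_add_to_batch(req, iob, (__force int) cmd->error,
                      blk_mq_end_request_batch);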

Fixes: 2385ebf38f94 ("block: null_blk: batched complete poll requests")
Reported-by: kernel test robot <lkp@intel.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: pktdvd: stop using bdi congestion framework.
NeilBrown [Fri, 10 Dec 2021 04:31:56 +0000 (21:31 -0700)]
pktdvd: stop using bdi congestion framework.

The bdi congestion framework isn't widely used and should be
deprecated.

pktdvd makes use of it to track congestion, but this can be done
entirely internally to pktdvd, so it doesn't need to use the framework.

So introduce a "congested" flag.  When waiting for bio_queue_size to
drop, set this flag and use a var_waitqueue() to wait on it.  When
bio_queue_size does drop and this flag is set, clear the flag and call
wake_up_var().

We don't use a wait_var_event macro for the waiting, as we need to set
the flag and drop the spinlock before calling schedule(), and while that
is possible with __wait_var_event(), the result is not easy to read.
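
A sketch of both sides (demo_* names are illustrative; field names follow the description above):

  static void demo_wait_not_congested(struct pktcdvd_device *pd)
  {
          struct wait_queue_head *wqh = __var_waitqueue(&pd->congested);
          DEFINE_WAIT(wait);

          spin_lock(&pd->lock);
          while (pd->bio_queue_size > pd->write_congestion_on) {
                  pd->congested = true;
                  prepare_to_wait(wqh, &wait, TASK_UNINTERRUPTIBLE);
                  spin_unlock(&pd->lock);         /* drop lock, then sleep */
                  schedule();
                  spin_lock(&pd->lock);
          }
          spin_unlock(&pd->lock);
          finish_wait(wqh, &wait);
  }

  /* completion side, once bio_queue_size has dropped: */
  static void demo_wakeup(struct pktcdvd_device *pd)
  {
          if (pd->congested) {
                  pd->congested = false;
                  wake_up_var(&pd->congested);
          }
  }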

Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: NeilBrown <neilb@suse.de>
Link: https://lore.kernel.org/r/163910843527.9928.857338663717630212@noble.neil.brown.name
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: io_uring: batch completion in prior_task_list
Hao Xu [Wed, 8 Dec 2021 05:21:25 +0000 (13:21 +0800)]
io_uring: batch completion in prior_task_list

In previous patches, we have already gathered some task works with
io_req_task_complete() as the callback in prior_task_list; let's complete
them in batch even when we cannot grab the uring lock. In this way, we
batch the req_complete_post path.

Signed-off-by: Hao Xu <haoxu@linux.alibaba.com>
Link: https://lore.kernel.org/r/20211208052125.351587-1-haoxu@linux.alibaba.com
Reviewed-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: io_uring: split io_req_complete_post() and add a helper
Hao Xu [Tue, 7 Dec 2021 09:39:50 +0000 (17:39 +0800)]
io_uring: split io_req_complete_post() and add a helper

Split io_req_complete_post(), this is a prep for the next patch.

Signed-off-by: Hao Xu <haoxu@linux.alibaba.com>
Link: https://lore.kernel.org/r/20211207093951.247840-5-haoxu@linux.alibaba.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: io_uring: add helper for task work execution code
Hao Xu [Tue, 7 Dec 2021 09:39:49 +0000 (17:39 +0800)]
io_uring: add helper for task work execution code

Add a helper for task work execution code. We will use it later.

Reviewed-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Hao Xu <haoxu@linux.alibaba.com>
Link: https://lore.kernel.org/r/20211207093951.247840-4-haoxu@linux.alibaba.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: io_uring: add a priority tw list for irq completion work
Hao Xu [Tue, 7 Dec 2021 09:39:48 +0000 (17:39 +0800)]
io_uring: add a priority tw list for irq completion work

Now we have a lot of task_work users, and some exist just to complete a
req and generate a cqe. Let's put such work on a new tw list which has a
higher priority, so that it can be handled quickly, reducing the average
req latency and letting users issue the next round of sqes earlier.
An explanatory case:

origin timeline:
    submit_sqe-->irq-->add completion task_work
    -->run heavy work0~n-->run completion task_work
now timeline:
    submit_sqe-->irq-->add completion task_work
    -->run completion task_work-->run heavy work0~n

Limitation: this optimization only applies when the submission and
reaping processes are in different threads. Otherwise we anyhow have to
submit new sqes after returning to userspace, and then the order of TWs
doesn't matter.

Tested this patch (and the following ones) by manually replacing
__io_queue_sqe() in io_queue_sqe() with io_req_task_queue() to construct
'heavy' task works. Then tested with fio:

ioengine=io_uring
sqpoll=1
thread=1
bs=4k
direct=1
rw=randread
time_based=1
runtime=600
randrepeat=0
group_reporting=1
filename=/dev/nvme0n1

Tried various iodepth values.
The peak IOPS for this patch is 710K, while the old one is 665K.
For avg latency, the difference shows as iodepth grows:
depth and avg latency (usec):
depth      new          old
 1        7.05         7.10
 2        8.47         8.60
 4        10.42        10.42
 8        13.78        13.22
 16       27.41        24.33
 32       49.40        53.08
 64       102.53       103.36
 128      196.98       205.61
 256      372.99       414.88
 512      747.23       791.30
 1024     1472.59      1538.72
 2048     3153.49      3329.01
 4096     6387.86      6682.54
 8192     12150.25     12774.14
 16384    23085.58     26044.71

Signed-off-by: Hao Xu <haoxu@linux.alibaba.com>
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/20211207093951.247840-3-haoxu@linux.alibaba.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: io-wq: add helper to merge two wq_lists
Hao Xu [Tue, 7 Dec 2021 09:39:47 +0000 (17:39 +0800)]
io-wq: add helper to merge two wq_lists

Add a helper to merge two wq_lists; it will be useful in the next
patches.

Reviewed-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Hao Xu <haoxu@linux.alibaba.com>
Link: https://lore.kernel.org/r/20211207093951.247840-2-haoxu@linux.alibaba.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: blk-mq: Optimise blk_mq_queue_tag_busy_iter() for shared tags
John Garry [Mon, 6 Dec 2021 12:49:50 +0000 (20:49 +0800)]
blk-mq: Optimise blk_mq_queue_tag_busy_iter() for shared tags

Kashyap reports high CPU usage in blk_mq_queue_tag_busy_iter() and callees
when using a megaraid SAS RAID card since moving to shared tags [0].

Previously, when shared tags were implemented as a shared sbitmap, this
function was less than optimal, since we would iterate through all tags
for all hctx's, yet only ever match up to tagset-depth number of rqs.

Since the change to shared tags, things are even less efficient if we have
parallel callers of blk_mq_queue_tag_busy_iter(). This is because in
bt_iter() -> blk_mq_find_and_get_req() there would be more contention on
accessing each request ref and tags->lock since they are now shared among
all HW queues.

Optimise by having separate calls to bt_for_each() for when we're using
shared tags. In this case no longer pass a hctx, as it is no longer
relevant, and teach bt_iter() about this.

Ming suggested something along the lines of this change, apart from a
different implementation.

[0] https://lore.kernel.org/linux-block/e4e92abbe9d52bcba6b8cc6c91c442cc@mail.gmail.com/

Signed-off-by: John Garry <john.garry@huawei.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Reported-and-tested-by: Kashyap Desai <kashyap.desai@broadcom.com>
Fixes: e155b0c238b2 ("blk-mq: Use shared tags for shared sbitmap support")
Link: https://lore.kernel.org/r/1638794990-137490-4-git-send-email-john.garry@huawei.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: blk-mq: Delete busy_iter_fn
John Garry [Mon, 6 Dec 2021 12:49:49 +0000 (20:49 +0800)]
blk-mq: Delete busy_iter_fn

Typedefs busy_iter_fn and busy_tag_iter_fn are now identical, so delete
busy_iter_fn to reduce duplication.

It would be nicer to delete busy_tag_iter_fn, as the name busy_iter_fn is
less specific.

However busy_tag_iter_fn is used in many different parts of the tree,
unlike busy_iter_fn which is just used in block/, so take the
straightforward path now, so that we can rename treewide later.

Signed-off-by: John Garry <john.garry@huawei.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Tested-by: Kashyap Desai <kashyap.desai@broadcom.com>
Link: https://lore.kernel.org/r/1638794990-137490-3-git-send-email-john.garry@huawei.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: blk-mq: Drop busy_iter_fn blk_mq_hw_ctx argument
John Garry [Mon, 6 Dec 2021 12:49:48 +0000 (20:49 +0800)]
blk-mq: Drop busy_iter_fn blk_mq_hw_ctx argument

The only user of the busy_iter_fn blk_mq_hw_ctx argument is
blk_mq_rq_inflight().

Function blk_mq_rq_inflight() uses the hctx to find the associated request
queue to match against the request. However this same check is already
done in caller bt_iter(), so drop this check.

With that change there are no more users of the busy_iter_fn
blk_mq_hw_ctx argument, so drop the argument.

Reviewed-by: Hannes Reinecke <hare@suse.de>

Signed-off-by: John Garry <john.garry@huawei.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Tested-by: Kashyap Desai <kashyap.desai@broadcom.com>
Link: https://lore.kernel.org/r/1638794990-137490-2-git-send-email-john.garry@huawei.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: Merge branch 'for-5.17/block' into for-next
Jens Axboe [Mon, 6 Dec 2021 16:41:50 +0000 (09:41 -0700)]
Merge branch 'for-5.17/block' into for-next

* for-5.17/block:
  blk-mq: don't use plug->mq_list->q directly in blk_mq_run_dispatch_ops()
  blk-mq: don't run might_sleep() if the operation needn't blocking

3 years ago: blk-mq: don't use plug->mq_list->q directly in blk_mq_run_dispatch_ops()
Ming Lei [Mon, 6 Dec 2021 03:33:50 +0000 (11:33 +0800)]
blk-mq: don't use plug->mq_list->q directly in blk_mq_run_dispatch_ops()

blk_mq_run_dispatch_ops() is defined as a macro, and plug->mq_list may
be changed while 'dispatch_ops' runs, so add a local variable to hold
the request queue.
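
The hazard being fixed, sketched (demo_* names are illustrative):

  #define demo_run_dispatch_ops(plug, dispatch_ops)                     \
  do {                                                                  \
          /* capture the queue up front: dispatch_ops consumes          \
           * requests, so (plug)->mq_list may be empty or point         \
           * elsewhere afterwards */                                    \
          struct request_queue *q = (plug)->mq_list->q;                 \
                                                                        \
          (dispatch_ops);                                               \
          demo_after_dispatch(q); /* must not re-read mq_list->q */     \
  } while (0)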

Reported-and-tested-by: Yi Zhang <yi.zhang@redhat.com>
Fixes: 4cafe86c9267 ("blk-mq: run dispatch lock once in case of issuing from list")
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: blk-mq: don't run might_sleep() if the operation needn't blocking
Ming Lei [Mon, 6 Dec 2021 11:12:13 +0000 (19:12 +0800)]
blk-mq: don't run might_sleep() if the operation needn't blocking

The operation protected via blk_mq_run_dispatch_ops() in blk_mq_run_hw_queue
won't sleep, so don't run might_sleep() for it.

Reported-and-tested-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: Merge branch 'for-5.17/io_uring' into for-next
Jens Axboe [Sun, 5 Dec 2021 15:56:32 +0000 (08:56 -0700)]
Merge branch 'for-5.17/io_uring' into for-next

* for-5.17/io_uring:
  io_uring: reuse io_req_task_complete for timeouts
  io_uring: tweak iopoll CQE_SKIP event counting
  io_uring: simplify selected buf handling
  io_uring: move up io_put_kbuf() and io_put_rw_kbuf()

3 years ago: io_uring: reuse io_req_task_complete for timeouts
Pavel Begunkov [Sun, 5 Dec 2021 14:38:00 +0000 (14:38 +0000)]
io_uring: reuse io_req_task_complete for timeouts

With kbuf unification, io_req_task_complete() is now a generic function;
use it for timeouts' tw completions.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/7142fa3cbaf3a4140d59bcba45cbe168cf40fac2.1638714983.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: io_uring: tweak iopoll CQE_SKIP event counting
Pavel Begunkov [Sun, 5 Dec 2021 14:37:59 +0000 (14:37 +0000)]
io_uring: tweak iopoll CQE_SKIP event counting

When iopolling, the userspace specifies the minimum number of "events" it
expects. Previously, we had one CQE per request, so the definition of
an "event" was unequivocal, but that's no longer the case with
REQ_F_CQE_SKIP.

Currently it counts the number of completed requests; replace it with
the number of posted CQEs. This allows users of the "one CQE per link"
scheme to wait for all N links in a single syscall, which is not
possible without the patch and requires extra context switches.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/d5a965c4d2249827392037bbd0186f87fea49c55.1638714983.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: io_uring: simplify selected buf handling
Pavel Begunkov [Sun, 5 Dec 2021 14:37:58 +0000 (14:37 +0000)]
io_uring: simplify selected buf handling

As selected buffers are now stored in a separate field in a request, get
rid of rw/recv specific helpers and simplify the code.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/bd4a866d8d91b044f748c40efff9e4eacd07536e.1638714983.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: io_uring: move up io_put_kbuf() and io_put_rw_kbuf()
Hao Xu [Sun, 5 Dec 2021 14:37:57 +0000 (14:37 +0000)]
io_uring: move up io_put_kbuf() and io_put_rw_kbuf()

Move them up to avoid explicit declaration. We will use them in later
patches.

Reviewed-by: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Hao Xu <haoxu@linux.alibaba.com>
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/3631243d6fc4a79bbba0cd62597fc8cd5be95924.1638714983.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: Merge branch 'for-5.17/block' into for-next
Jens Axboe [Fri, 3 Dec 2021 21:51:46 +0000 (14:51 -0700)]
Merge branch 'for-5.17/block' into for-next

* for-5.17/block:
  blk-mq: run dispatch lock once in case of issuing from list
  blk-mq: pass request queue to blk_mq_run_dispatch_ops
  blk-mq: move srcu from blk_mq_hw_ctx to request_queue
  blk-mq: remove hctx_lock and hctx_unlock
  block: switch to atomic_t for request references
  block: move direct_IO into our own read_iter handler
  mm: move filemap_range_needs_writeback() into header

3 years ago: blk-mq: run dispatch lock once in case of issuing from list
Ming Lei [Fri, 3 Dec 2021 13:15:34 +0000 (21:15 +0800)]
blk-mq: run dispatch lock once in case of issuing from list

It isn't necessary to call blk_mq_run_dispatch_ops() once for each
request issued directly; it is enough to do it one time when issuing
from the whole list.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20211203131534.3668411-5-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: blk-mq: pass request queue to blk_mq_run_dispatch_ops
Ming Lei [Fri, 3 Dec 2021 13:15:33 +0000 (21:15 +0800)]
blk-mq: pass request queue to blk_mq_run_dispatch_ops

We have switched to allocating the srcu structure in the request queue,
so it is fine to pass the request queue to blk_mq_run_dispatch_ops().

Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20211203131534.3668411-4-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: blk-mq: move srcu from blk_mq_hw_ctx to request_queue
Ming Lei [Fri, 3 Dec 2021 13:15:32 +0000 (21:15 +0800)]
blk-mq: move srcu from blk_mq_hw_ctx to request_queue

In case of BLK_MQ_F_BLOCKING, a per-hctx srcu is used to protect the
dispatch critical area. However, this srcu instance stays at the end of
the hctx, and it often takes a standalone cacheline, which is often cold.

Inside srcu_read_lock() and srcu_read_unlock(), the WRITE is always done
on the indirect percpu variable, which is allocated from the heap instead
of being embedded, and srcu->srcu_idx is only read in srcu_read_lock().
So it doesn't matter whether the srcu structure stays in the hctx or in
the request queue.

Switch to a per-request-queue srcu for protecting dispatch; this
simplifies quiesce a lot, not to mention that quiesce is always done
request-queue wide anyway.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20211203131534.3668411-3-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: blk-mq: remove hctx_lock and hctx_unlock
Ming Lei [Fri, 3 Dec 2021 13:15:31 +0000 (21:15 +0800)]
blk-mq: remove hctx_lock and hctx_unlock

Remove hctx_lock and hctx_unlock, and add one helper,
blk_mq_run_dispatch_ops(), to run the code block defined in dispatch_ops
with the rcu/srcu read lock held.

Compared with hctx_lock()/hctx_unlock():

1) two branches are reduced to one, so we just need to check
(hctx->flags & BLK_MQ_F_BLOCKING) once when running one dispatch_ops

2) srcu_idx doesn't need to be touched in the non-blocking case

3) might_sleep_if() can be moved to the blocking branch

Also put the added blk_mq_run_dispatch_ops() in a private header, so
that the following patch can use it outside of blk-mq.c.
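
The structure of the helper, roughly (a condensed sketch of the macro described above, not its exact definition):

  #define demo_run_dispatch_ops(q, dispatch_ops)                 \
  do {                                                           \
          if (!((q)->tag_set->flags & BLK_MQ_F_BLOCKING)) {      \
                  rcu_read_lock();                               \
                  (dispatch_ops);                                \
                  rcu_read_unlock();                             \
          } else {                                               \
                  int srcu_idx;                                  \
                                                                 \
                  might_sleep();                                 \
                  srcu_idx = srcu_read_lock((q)->srcu);          \
                  (dispatch_ops);                                \
                  srcu_read_unlock((q)->srcu, srcu_idx);         \
          }                                                      \
  } while (0)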

Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20211203131534.3668411-2-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: block: switch to atomic_t for request references
Jens Axboe [Thu, 14 Oct 2021 20:39:59 +0000 (14:39 -0600)]
block: switch to atomic_t for request references

refcount_t is not as expensive as it used to be, but it's still more
expensive than the io_uring method of using atomic_t and just checking
for potential over/underflow.

This borrows that same implementation, which in turn is based on the
mm implementation from Linus.
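
The borrowed pattern, approximately (a sketch of the checking idea, not the exact helpers):

  static inline void demo_ref_get(atomic_t *ref)
  {
          /* incrementing from zero would mean a use-after-free */
          WARN_ON_ONCE(atomic_inc_return(ref) <= 1);
  }

  static inline bool demo_ref_put_and_test(atomic_t *ref)
  {
          /* dropping below zero would mean an unbalanced put */
          WARN_ON_ONCE(atomic_read(ref) <= 0);
          return atomic_dec_and_test(ref);
  }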

Reviewed-by: Keith Busch <kbusch@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: block: move direct_IO into our own read_iter handler
Jens Axboe [Thu, 28 Oct 2021 14:57:09 +0000 (08:57 -0600)]
block: move direct_IO into our own read_iter handler

Don't call into generic_file_read_iter() if we know it's O_DIRECT; just
set it up ourselves and call our own handler. This avoids an indirect
call for O_DIRECT.

Fall back to filemap_read() if we fail.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: mm: move filemap_range_needs_writeback() into header
Jens Axboe [Thu, 28 Oct 2021 14:47:05 +0000 (08:47 -0600)]
mm: move filemap_range_needs_writeback() into header

No functional changes in this patch, just in preparation for efficiently
calling this light function from the block O_DIRECT handling.

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: Merge branch 'for-5.17/drivers' into for-next
Jens Axboe [Fri, 3 Dec 2021 13:36:36 +0000 (06:36 -0700)]
Merge branch 'for-5.17/drivers' into for-next

* for-5.17/drivers:
  block: null_blk: batched complete poll requests

3 years ago: block: null_blk: batched complete poll requests
Ming Lei [Fri, 3 Dec 2021 08:17:03 +0000 (16:17 +0800)]
block: null_blk: batched complete poll requests

Complete poll requests via blk_mq_add_to_batch() and
blk_mq_end_request_batch(), so that we can cover the batched completion
code path by running the null_blk test.

Meanwhile, this approach shows a ~14% IOPS boost on 't/io_uring
/dev/nullb0' in my test.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20211203081703.3506020-1-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: Merge branch 'for-5.17/drivers' into for-next
Jens Axboe [Fri, 3 Dec 2021 13:34:00 +0000 (06:34 -0700)]
Merge branch 'for-5.17/drivers' into for-next

* for-5.17/drivers:
  floppy: Add max size check for user space request
  floppy: Fix hang in watchdog when disk is ejected
  null_blk: allow zero poll queues

3 years ago: floppy: Add max size check for user space request
Xiongwei Song [Tue, 16 Nov 2021 13:10:33 +0000 (21:10 +0800)]
floppy: Add max size check for user space request

We need to check the max request size coming from user space before
allocating pages. If the request size exceeds the limit, return -EINVAL.
This check avoids the warning below from the page allocator.
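
The shape of such a check (a sketch; DEMO_MAX_LEN and the function name stand in for whatever limit and entry point the driver actually uses):

  static int demo_raw_cmd_copyin(void __user *param, long len)
  {
          char *buf;

          if (len <= 0 || len > DEMO_MAX_LEN)     /* reject before allocating */
                  return -EINVAL;

          buf = (char *)__get_free_pages(GFP_KERNEL, get_order(len));
          if (!buf)
                  return -ENOMEM;
          if (copy_from_user(buf, param, len)) {
                  free_pages((unsigned long)buf, get_order(len));
                  return -EFAULT;
          }
          /* ... process the command ... */
          free_pages((unsigned long)buf, get_order(len));
          return 0;
  }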

WARNING: CPU: 3 PID: 16525 at mm/page_alloc.c:5344 current_gfp_context include/linux/sched/mm.h:195 [inline]
WARNING: CPU: 3 PID: 16525 at mm/page_alloc.c:5344 __alloc_pages+0x45d/0x500 mm/page_alloc.c:5356
Modules linked in:
CPU: 3 PID: 16525 Comm: syz-executor.3 Not tainted 5.15.0-syzkaller #0
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.14.0-2 04/01/2014
RIP: 0010:__alloc_pages+0x45d/0x500 mm/page_alloc.c:5344
Code: be c9 00 00 00 48 c7 c7 20 4a 97 89 c6 05 62 32 a7 0b 01 e8 74 9a 42 07 e9 6a ff ff ff 0f 0b e9 a0 fd ff ff 40 80 e5 3f eb 88 <0f> 0b e9 18 ff ff ff 4c 89 ef 44 89 e6 45 31 ed e8 1e 76 ff ff e9
RSP: 0018:ffffc90023b87850 EFLAGS: 00010246
RAX: 0000000000000000 RBX: 1ffff92004770f0b RCX: dffffc0000000000
RDX: 0000000000000000 RSI: 0000000000000033 RDI: 0000000000010cc1
RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000001
R10: ffffffff81bb4686 R11: 0000000000000001 R12: ffffffff902c1960
R13: 0000000000000033 R14: 0000000000000000 R15: ffff88804cf64a30
FS:  0000000000000000(0000) GS:ffff88802cd00000(0063) knlGS:00000000f44b4b40
CS:  0010 DS: 002b ES: 002b CR0: 0000000080050033
CR2: 000000002c921000 CR3: 000000004f507000 CR4: 0000000000150ee0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 alloc_pages+0x1a7/0x300 mm/mempolicy.c:2191
 __get_free_pages+0x8/0x40 mm/page_alloc.c:5418
 raw_cmd_copyin drivers/block/floppy.c:3113 [inline]
 raw_cmd_ioctl drivers/block/floppy.c:3160 [inline]
 fd_locked_ioctl+0x12e5/0x2820 drivers/block/floppy.c:3528
 fd_ioctl drivers/block/floppy.c:3555 [inline]
 fd_compat_ioctl+0x891/0x1b60 drivers/block/floppy.c:3869
 compat_blkdev_ioctl+0x3b8/0x810 block/ioctl.c:662
 __do_compat_sys_ioctl+0x1c7/0x290 fs/ioctl.c:972
 do_syscall_32_irqs_on arch/x86/entry/common.c:112 [inline]
 __do_fast_syscall_32+0x65/0xf0 arch/x86/entry/common.c:178
 do_fast_syscall_32+0x2f/0x70 arch/x86/entry/common.c:203
 entry_SYSENTER_compat_after_hwframe+0x4d/0x5c

Reported-by: syzbot+23a02c7df2cf2bc93fa2@syzkaller.appspotmail.com
Link: https://lore.kernel.org/r/20211116131033.27685-1-sxwjean@me.com
Signed-off-by: Xiongwei Song <sxwjean@gmail.com>
Signed-off-by: Denis Efremov <efremov@linux.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: floppy: Fix hang in watchdog when disk is ejected
Tasos Sahanidis [Fri, 3 Sep 2021 06:47:58 +0000 (09:47 +0300)]
floppy: Fix hang in watchdog when disk is ejected

When the watchdog detects a disk change, it calls cancel_activity(),
which in turn tries to cancel the fd_timer delayed work.

In the above scenario, fd_timer_fn is set to fd_watchdog(), meaning
it is trying to cancel its own work.
This results in a hang as cancel_delayed_work_sync() is waiting for the
watchdog (itself) to return, which never happens.

This can be reproduced relatively consistently by attempting to read a
broken floppy, and ejecting it while IO is being attempted and retried.

To resolve this, this patch calls cancel_delayed_work() instead, which
cancels the work without waiting for the watchdog to return and finish.

Before this regression was introduced, the code in this section used
del_timer(), and not del_timer_sync() to delete the watchdog timer.

Link: https://lore.kernel.org/r/399e486c-6540-db27-76aa-7a271b061f76@tasossah.com
Fixes: 070ad7e793dc ("floppy: convert to delayed work and single-thread wq")
Signed-off-by: Tasos Sahanidis <tasos@tasossah.com>
Signed-off-by: Denis Efremov <efremov@linux.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: null_blk: allow zero poll queues
Ming Lei [Fri, 3 Dec 2021 02:39:35 +0000 (10:39 +0800)]
null_blk: allow zero poll queues

There isn't any reason not to allow zero poll queues from the user's
viewpoint.

Also, sometimes we need to compare io poll between poll mode and irq
mode, so not allowing zero poll queues is bad.

Fixes: 15dfc662ef31 ("null_blk: Fix handling of submit_queues and poll_queues attributes")
Cc: Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20211203023935.3424042-1-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: Merge branch 'for-5.17/block' into for-next
Jens Axboe [Fri, 3 Dec 2021 02:39:19 +0000 (19:39 -0700)]
Merge branch 'for-5.17/block' into for-next

* for-5.17/block:
  block: fix double bio queue when merging in cached request path
  block: get rid of useless goto and label in blk_mq_get_new_requests()

3 years ago: block: fix double bio queue when merging in cached request path
Jens Axboe [Thu, 2 Dec 2021 19:43:46 +0000 (12:43 -0700)]
block: fix double bio queue when merging in cached request path

When we attempt to merge off the cached request path, we return NULL
if successful. This makes the caller believe that it should allocate
a new request, and hence we end up with the bio both merged and associated
with a new request. This, predictably, leads to all sorts of crashes.

Pass in a pointer to the bio pointer, and clear it for the merge case.
Then the caller knows that the bio is already queued, and no new requests
need to get allocated.
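
A sketch of the changed calling convention (demo_* names are illustrative):

  static struct request *demo_get_cached_request(struct request_queue *q,
                                                 struct bio **bio)
  {
          if (demo_attempt_merge(q, *bio)) {
                  *bio = NULL;    /* tell the caller the bio is queued */
                  return NULL;
          }
          return demo_cached_rq(q);       /* may be NULL: allocate anew */
  }

  void demo_submit_bio(struct request_queue *q, struct bio *bio)
  {
          struct request *rq = demo_get_cached_request(q, &bio);

          if (!bio)
                  return;         /* merged: must not be queued again */
          if (!rq)
                  rq = demo_alloc_new_request(q, bio);
          /* ... issue rq ... */
  }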

Fixes: 5b13bc8a3fd5 ("blk-mq: cleanup request allocation")
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: block: get rid of useless goto and label in blk_mq_get_new_requests()
Jens Axboe [Thu, 2 Dec 2021 19:42:58 +0000 (12:42 -0700)]
block: get rid of useless goto and label in blk_mq_get_new_requests()

The expected case is returning a request; just check for success and
return the request rather than having an error label.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: Merge branch 'for-5.17/block' into for-next
Jens Axboe [Thu, 2 Dec 2021 15:20:56 +0000 (08:20 -0700)]
Merge branch 'for-5.17/block' into for-next

* for-5.17/block:
  blk-mq: check q->poll_stat in queue_poll_stat_show

3 years ago: blk-mq: check q->poll_stat in queue_poll_stat_show
Ming Lei [Thu, 2 Dec 2021 09:07:16 +0000 (17:07 +0800)]
blk-mq: check q->poll_stat in queue_poll_stat_show

Without checking q->poll_stat in queue_poll_stat_show(), a kernel panic
may occur if q->poll_stat isn't allocated.

Fixes: 48b5c1fbcd8c ("block: only allocate poll_stats if there's a user of them")
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20211202090716.3292244-1-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: Merge branch 'for-5.17/io_uring' into for-next
Jens Axboe [Mon, 29 Nov 2021 13:48:22 +0000 (06:48 -0700)]
Merge branch 'for-5.17/io_uring' into for-next

* for-5.17/io_uring:
  io_uring: validate timespec for timeout removals

3 years ago: Merge branch 'for-5.17/block' into for-next
Jens Axboe [Mon, 29 Nov 2021 13:48:20 +0000 (06:48 -0700)]
Merge branch 'for-5.17/block' into for-next

* for-5.17/block:
  block: Fix fsync always failed if once failed

3 years ago: io_uring: validate timespec for timeout removals
Ye Bin [Mon, 29 Nov 2021 04:15:37 +0000 (12:15 +0800)]
io_uring: validate timespec for timeout removals

Like commit f6223ff79966, timeout removal should also validate the
timespec that is being passed in.
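
The validation pattern in question, sketched as a standalone helper:

  static int demo_get_valid_timespec(struct timespec64 *ts,
                          const struct __kernel_timespec __user *uts)
  {
          if (get_timespec64(ts, uts))
                  return -EFAULT;
          /* rejects negative tv_sec/tv_nsec and tv_nsec >= NSEC_PER_SEC */
          if (!timespec64_valid(ts))
                  return -EINVAL;
          return 0;
  }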

Signed-off-by: Ye Bin <yebin10@huawei.com>
Link: https://lore.kernel.org/r/20211129041537.1936270-1-yebin10@huawei.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: block: Fix fsync always failed if once failed
Ye Bin [Mon, 29 Nov 2021 01:26:59 +0000 (09:26 +0800)]
block: Fix fsync always failed if once failed

We ran tests with error fault injection based on v4.19; after testing
for some time we found that sync /dev/sda always failed.
[root@localhost] sync /dev/sda
sync: error syncing '/dev/sda': Input/output error

scsi log as follows:
[19069.812296] sd 0:0:0:0: [sda] tag#64 Send: scmd 0x00000000d03a0b6b
[19069.812302] sd 0:0:0:0: [sda] tag#64 CDB: Synchronize Cache(10) 35 00 00 00 00 00 00 00 00 00
[19069.812533] sd 0:0:0:0: [sda] tag#64 Done: SUCCESS Result: hostbyte=DID_OK driverbyte=DRIVER_OK
[19069.812536] sd 0:0:0:0: [sda] tag#64 CDB: Synchronize Cache(10) 35 00 00 00 00 00 00 00 00 00
[19069.812539] sd 0:0:0:0: [sda] tag#64 scsi host busy 1 failed 0
[19069.812542] sd 0:0:0:0: Notifying upper driver of completion (result 0)
[19069.812546] sd 0:0:0:0: [sda] tag#64 sd_done: completed 0 of 0 bytes
[19069.812549] sd 0:0:0:0: [sda] tag#64 0 sectors total, 0 bytes done.
[19069.812564] print_req_error: I/O error, dev sda, sector 0

ftrace log as follows:
 rep-306069 [007] .... 19654.923315: block_bio_queue: 8,0 FWS 0 + 0 [rep]
 rep-306069 [007] .... 19654.923333: block_getrq: 8,0 FWS 0 + 0 [rep]
 kworker/7:1H-250   [007] .... 19654.923352: block_rq_issue: 8,0 FF 0 () 0 + 0 [kworker/7:1H]
 <idle>-0     [007] ..s. 19654.923562: block_rq_complete: 8,0 FF () 18446744073709551615 + 0 [0]
 <idle>-0     [007] d.s. 19654.923576: block_rq_complete: 8,0 WS () 0 + 0 [-5]

Commit 8d6996630c03 introduced 'fq->rq_status', which is only updated
when the 'flush_rq' reference count isn't zero. If a flush request fails
once, the error code is recorded in 'fq->rq_status'; if there is then no
chance to update 'fq->rq_status', every subsequent fsync will fail.
To address this issue, reset 'fq->rq_status' after returning the error
code to the upper layer.
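
A simplified sketch of the reset (the real flush completion path has more bookkeeping around it):

  static void demo_flush_end_io(struct blk_flush_queue *fq, struct request *rq)
  {
          blk_status_t error = fq->rq_status;

          /* consume the recorded error so the next flush starts clean */
          fq->rq_status = BLK_STS_OK;

          blk_mq_end_request(rq, error);
  }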

Fixes: 8d6996630c03 ("block: fix null pointer dereference in blk_mq_rq_timed_out()")
Signed-off-by: Ye Bin <yebin10@huawei.com>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Link: https://lore.kernel.org/r/20211129012659.1553733-1-yebin10@huawei.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: Merge branch 'for-5.17/drivers' into for-next
Jens Axboe [Mon, 29 Nov 2021 13:42:08 +0000 (06:42 -0700)]
Merge branch 'for-5.17/drivers' into for-next

* for-5.17/drivers:
  loop: don't hold lo_mutex during __loop_clr_fd()

3 years ago: Merge branch 'for-5.17/block' into for-next
Jens Axboe [Mon, 29 Nov 2021 13:42:05 +0000 (06:42 -0700)]
Merge branch 'for-5.17/block' into for-next

* for-5.17/block: (72 commits)
  scsi: remove the gendisk argument to scsi_ioctl
  block: remove the gendisk argument to blk_execute_rq
  block: remove the ->rq_disk field in struct request
  block: don't check ->rq_disk in merges
  mtd_blkdevs: remove the sector out of range check in do_blktrans_request
  block: Remove redundant initialization of variable ret
  block: simplify ioc_lookup_icq
  block: simplify ioc_create_icq
  block: return the io_context from create_task_io_context
  block: use alloc_io_context in __copy_io
  block: factor out a alloc_io_context helper
  block: remove get_io_context_active
  block: move the remaining elv.icq handling to the I/O scheduler
  block: move blk_mq_sched_assign_ioc to blk-ioc.c
  block: mark put_io_context_active static
  Revert "block: Provide blk_mq_sched_get_icq()"
  bfq: use bfq_bic_lookup in bfq_limit_depth
  bfq: simplify bfq_bic_lookup
  fork: move copy_io to block/blk-ioc.c
  RDMA/qib: rename copy_io to qib_copy_io
  ...

3 years ago: Merge branch 'for-5.17/io_uring' into for-next
Jens Axboe [Mon, 29 Nov 2021 13:42:03 +0000 (06:42 -0700)]
Merge branch 'for-5.17/io_uring' into for-next

* for-5.17/io_uring:
  io_uring: better to use REQ_F_IO_DRAIN for req->flags
  io_uring: fix no lock protection for ctx->cq_extra
  io_uring: disable drain with cqe skip
  io_uring: don't spinlock when not posting CQEs
  io_uring: add option to skip CQE posting
  io_uring: clean cqe filling functions
  io_uring: improve argument types of kiocb_done()
  io_uring: clean __io_import_iovec()
  io_uring: improve send/recv error handling
  io_uring: simplify reissue in kiocb_done

3 years ago: loop: don't hold lo_mutex during __loop_clr_fd()
Tetsuo Handa [Wed, 24 Nov 2021 10:47:40 +0000 (19:47 +0900)]
loop: don't hold lo_mutex during __loop_clr_fd()

syzbot is reporting circular locking problem at __loop_clr_fd() [1], for
commit 87579e9b7d8dc36e ("loop: use worker per cgroup instead of kworker")
is calling destroy_workqueue() with lo->lo_mutex held.

Since all functions where lo->lo_state matters are already checking
lo->lo_state with lo->lo_mutex held (in order to avoid racing with e.g.
ioctl(LOOP_CTL_REMOVE)), and __loop_clr_fd() can be called from either
ioctl(LOOP_CLR_FD) xor close(), lo->lo_state == Lo_rundown is considered
as an exclusive lock for __loop_clr_fd(). Therefore, hold lo->lo_mutex
inside __loop_clr_fd() only when asserting/updating lo->lo_state.

Since ioctl(LOOP_CLR_FD) depends on lo->lo_state == Lo_bound, a valid
lo->lo_backing_file must have been assigned by ioctl(LOOP_SET_FD) or
ioctl(LOOP_CONFIGURE). Thus, we can remove lo->lo_backing_file test,
and convert __loop_clr_fd() into a void function.

Link: https://syzkaller.appspot.com/bug?extid=63614029dfb79abd4383
Reported-by: syzbot <syzbot+63614029dfb79abd4383@syzkaller.appspotmail.com>
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/8ebe3b2e-8975-7f26-0620-7144a3b8b8cd@i-love.sakura.ne.jp
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: scsi: remove the gendisk argument to scsi_ioctl
Christoph Hellwig [Fri, 26 Nov 2021 12:18:02 +0000 (13:18 +0100)]
scsi: remove the gendisk argument to scsi_ioctl

Now that blk_execute_rq does not take a gendisk argument there is no need
to pass it through the scsi_ioctl callchain either.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20211126121802.2090656-6-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: block: remove the gendisk argument to blk_execute_rq
Christoph Hellwig [Fri, 26 Nov 2021 12:18:01 +0000 (13:18 +0100)]
block: remove the gendisk argument to blk_execute_rq

Remove the gendisk argument to blk_execute_rq and blk_execute_rq_nowait
given that it is unused now.  Also convert the boolean at_head parameter
to actually use the bool type while touching the prototype.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20211126121802.2090656-5-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: block: remove the ->rq_disk field in struct request
Christoph Hellwig [Fri, 26 Nov 2021 12:18:00 +0000 (13:18 +0100)]
block: remove the ->rq_disk field in struct request

Just use the disk attached to the request_queue instead.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20211126121802.2090656-4-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: block: don't check ->rq_disk in merges
Christoph Hellwig [Fri, 26 Nov 2021 12:17:59 +0000 (13:17 +0100)]
block: don't check ->rq_disk in merges

There is a 1:1 relationship between request_queues and gendisks now, so
no need for these extra checks.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20211126121802.2090656-3-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: mtd_blkdevs: remove the sector out of range check in do_blktrans_request
Christoph Hellwig [Fri, 26 Nov 2021 12:17:58 +0000 (13:17 +0100)]
mtd_blkdevs: remove the sector out of range check in do_blktrans_request

The block layer already performs this check, no need to duplicate it in
the driver.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Miquel Raynal <miquel.raynal@bootlin.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Link: https://lore.kernel.org/r/20211126121802.2090656-2-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: block: Remove redundant initialization of variable ret
Colin Ian King [Fri, 26 Nov 2021 23:06:52 +0000 (23:06 +0000)]
block: Remove redundant initialization of variable ret

The variable ret is being initialized with a value that is never read;
it is updated later on. The assignment is redundant and can be removed.

Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
Link: https://lore.kernel.org/r/20211126230652.1175636-1-colin.i.king@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: block: simplify ioc_lookup_icq
Christoph Hellwig [Fri, 26 Nov 2021 11:58:17 +0000 (12:58 +0100)]
block: simplify ioc_lookup_icq

Remove the ioc argument as it always points to current->io_context.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20211126115817.2087431-15-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: block: simplify ioc_create_icq
Christoph Hellwig [Fri, 26 Nov 2021 11:58:16 +0000 (12:58 +0100)]
block: simplify ioc_create_icq

Remove the ioc and gfp_mask argument, which are hard coded by the caller.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20211126115817.2087431-14-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: block: return the io_context from create_task_io_context
Christoph Hellwig [Fri, 26 Nov 2021 11:58:15 +0000 (12:58 +0100)]
block: return the io_context from create_task_io_context

Grab a reference to the newly allocated or existing io_context in
create_task_io_context and return it.  This simplifies the callers and
removes the need for double lookups.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20211126115817.2087431-13-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years ago: block: use alloc_io_context in __copy_io
Christoph Hellwig [Fri, 26 Nov 2021 11:58:14 +0000 (12:58 +0100)]
block: use alloc_io_context in __copy_io

In __copy_io we know that the newly allocated task_struct does not have
an I/O context yet and is not exiting.  So just allocate the I/O context
struct and install it directly.  There is no need to lock the task
either as it is just being created.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20211126115817.2087431-12-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
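
A condensed sketch of the simplified path: the task is still being
constructed, so no task_lock() or re-lookup is needed and the fresh
context can be installed directly (names follow the kernel, body
abridged):

/* In __copy_io(), non-CLONE_IO case: tsk is not visible to anyone
 * yet, so install a fresh io_context without taking task_lock().
 */
struct io_context *ioc = alloc_io_context(GFP_KERNEL, NUMA_NO_NODE);

if (!ioc)
	return -ENOMEM;
tsk->io_context = ioc;
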
3 years agoblock: factor out a alloc_io_context helper
Christoph Hellwig [Fri, 26 Nov 2021 11:58:13 +0000 (12:58 +0100)]
block: factor out a alloc_io_context helper

Factor out a helper that just allocates an I/O context.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20211126115817.2087431-11-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
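
Condensed, the helper is a slab allocation plus refcount and list
initialization (a sketch based on the surrounding blk-ioc.c code, not
the verbatim function):

static struct io_context *alloc_io_context(gfp_t gfp_flags, int node)
{
	struct io_context *ioc;

	ioc = kmem_cache_alloc_node(iocontext_cachep,
				    gfp_flags | __GFP_ZERO, node);
	if (unlikely(!ioc))
		return NULL;

	atomic_long_set(&ioc->refcount, 1);
	atomic_set(&ioc->active_ref, 1);
	spin_lock_init(&ioc->lock);
	INIT_HLIST_HEAD(&ioc->icq_list);
	return ioc;
}
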
3 years agoblock: remove get_io_context_active
Christoph Hellwig [Fri, 26 Nov 2021 11:58:12 +0000 (12:58 +0100)]
block: remove get_io_context_active

Fold it into its only caller, and remove a lot of the debug checks
that are not needed.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20211126115817.2087431-10-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoblock: move the remaining elv.icq handling to the I/O scheduler
Christoph Hellwig [Fri, 26 Nov 2021 11:58:11 +0000 (12:58 +0100)]
block: move the remaining elv.icq handling to the I/O scheduler

After the prepare side has been moved to the only I/O scheduler that
cares, do the same for the cleanup and the NULL initialization.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20211126115817.2087431-9-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoblock: move blk_mq_sched_assign_ioc to blk-ioc.c
Christoph Hellwig [Fri, 26 Nov 2021 11:58:10 +0000 (12:58 +0100)]
block: move blk_mq_sched_assign_ioc to blk-ioc.c

Move blk_mq_sched_assign_ioc so that many interfaces from the file can
be marked static.  Rename the function to ioc_find_get_icq as well and
return the icq to simplify the interface.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20211126115817.2087431-8-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
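
In rough outline, the renamed function finds the current task's icq for
a queue, creating the io_context and icq on demand, and returns it
(sketch; locking and error paths elided):

struct io_cq *ioc_find_get_icq(struct request_queue *q)
{
	struct io_context *ioc = current->io_context;
	struct io_cq *icq;

	if (!ioc)
		ioc = create_task_io_context(current, GFP_ATOMIC, q->node);

	icq = ioc_lookup_icq(q);
	if (!icq)
		icq = ioc_create_icq(q);
	return icq;		/* referenced icq, or NULL on failure */
}
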
3 years agoblock: mark put_io_context_active static
Christoph Hellwig [Fri, 26 Nov 2021 11:58:09 +0000 (12:58 +0100)]
block: mark put_io_context_active static

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20211126115817.2087431-7-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoRevert "block: Provide blk_mq_sched_get_icq()"
Christoph Hellwig [Fri, 26 Nov 2021 11:58:08 +0000 (12:58 +0100)]
Revert "block: Provide blk_mq_sched_get_icq()"

This reverts commit 4896c4e64ba5d5d5acdbcf68c5910dd4f6d8fa62.

The helper is not needed any more.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20211126115817.2087431-6-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agobfq: use bfq_bic_lookup in bfq_limit_depth
Christoph Hellwig [Fri, 26 Nov 2021 11:58:07 +0000 (12:58 +0100)]
bfq: use bfq_bic_lookup in bfq_limit_depth

No need to create a new I/O context in ->limit_depth if none is
present yet.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20211126115817.2087431-5-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
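
Schematically, ->limit_depth now only looks up an existing context and
falls back to the default depths when none exists (a sketch, not the
verbatim bfq code):

/* In bfq_limit_depth(): look up, never allocate.  A task without an
 * io_context simply gets the default queue depth.
 */
struct bfq_io_cq *bic = bfq_bic_lookup(data->q);

if (!bic)
	return;
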
3 years agobfq: simplify bfq_bic_lookup
Christoph Hellwig [Fri, 26 Nov 2021 11:58:06 +0000 (12:58 +0100)]
bfq: simplify bfq_bic_lookup

Remove the unused bfqd argument, and hardcode ioc to current->io_context.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20211126115817.2087431-4-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agofork: move copy_io to block/blk-ioc.c
Christoph Hellwig [Fri, 26 Nov 2021 11:58:05 +0000 (12:58 +0100)]
fork: move copy_io to block/blk-ioc.c

Move the copying of the I/O context to the block layer as that is where
we can use the proper low-level interfaces.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20211126115817.2087431-3-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoRDMA/qib: rename copy_io to qib_copy_io
Christoph Hellwig [Fri, 26 Nov 2021 11:58:04 +0000 (12:58 +0100)]
RDMA/qib: rename copy_io to qib_copy_io

Add the proper module prefix to avoid conflicts with a function
in the scheduler.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20211126115817.2087431-2-hch@lst.de
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoblk-mq: use bio->bi_opf after bio is checked
Ming Lei [Fri, 26 Nov 2021 16:19:43 +0000 (00:19 +0800)]
blk-mq: use bio->bi_opf after bio is checked

bio->bi_opf isn't finalized until the bio has been checked, so only use
it after submit_bio_checks() returns.

Fixes: 5b13bc8a3fd5 ("blk-mq: cleanup request allocation")
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
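
The bug class is a plain read-before-finalize ordering problem;
schematically (not the exact call sites):

/* Buggy order: the checks may still rewrite bi_opf. */
rq = blk_mq_alloc_request(q, bio->bi_opf, flags);	/* stale value */
if (!submit_bio_checks(bio))
	return;

/* Fixed order: check first, then consume the finalized bi_opf. */
if (!submit_bio_checks(bio))
	return;
rq = blk_mq_alloc_request(q, bio->bi_opf, flags);
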
3 years agobfq: Do not let waker requests skip proper accounting
Jan Kara [Thu, 25 Nov 2021 13:36:41 +0000 (14:36 +0100)]
bfq: Do not let waker requests skip proper accounting

Commit 7cc4ffc55564 ("block, bfq: put reqs of waker and woken in
dispatch list") added a condition to bfq_insert_request() which adds the
waker's requests directly to the dispatch list. The rationale was that
completing the waker's IO is needed to get more IO for the current
queue. Although this rationale is valid, there is a hole in it. The
waker does not necessarily serve IO only for the current queue, and its
current IO may not be needed for the current queue to make progress.
Furthermore, injecting IO like this completely bypasses any service
accounting within bfq, so we neither properly track how much service
the waker's queue is getting nor even whether the waker is doing any IO
at all. Depending on the conditions, this can result in the waker
getting too much or too little service.

Consider for example the following job file:

[global]
directory=/mnt/repro/
rw=write
size=8g
time_based
runtime=30
ramp_time=10
blocksize=1m
direct=0
ioengine=sync

[slowwriter]
numjobs=1
prioclass=2
prio=7
fsync=200

[fastwriter]
numjobs=1
prioclass=2
prio=0
fsync=200

Despite the processes having very different IO priorities, they get
about the same amount of service. The reason is that bfq identifies
these processes as having a waker-wakee relationship, and once that
happens, IO from fastwriter gets injected during slowwriter's time
slice. As a result bfq is not aware that fastwriter has any IO to do
and constantly schedules only slowwriter's queue. Thus fastwriter is
forced to compete with slowwriter's IO all the time instead of getting
its share of time based on IO priority.

Drop the special injection condition from bfq_insert_request(). As a
result, requests will be tracked and queued in a normal way and on next
dispatch bfq_select_queue() can decide whether the waker's inserted
requests should be injected during the current queue's timeslice or not.

Fixes: 7cc4ffc55564 ("block, bfq: put reqs of waker and woken in dispatch list")
Acked-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20211125133645.27483-8-jack@suse.cz
Signed-off-by: Jens Axboe <axboe@kernel.dk>
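
Schematically, the dropped special case short-circuited insertion like
this (a reconstruction; bfqq_is_waker() stands in for the actual waker
test in bfq_insert_request()):

/* Removed: sending waker requests straight to the dispatch list
 * bypassed all of bfq's per-queue service accounting.
 */
if (bfqq && bfqq_is_waker(bfqq)) {
	list_add_tail(&rq->queuelist, &bfqd->dispatch);
	return;
}

/* Such requests now take the normal insertion path; bfq_select_queue()
 * decides at dispatch time whether to inject them.
 */
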
3 years agobfq: Log waker detections
Jan Kara [Thu, 25 Nov 2021 13:36:40 +0000 (14:36 +0100)]
bfq: Log waker detections

Waker-wakee relationships are important in deciding whether one queue
can preempt another. Print information about detected waker-wakee
relationships so that scheduling decisions can be better understood
from block traces.

Acked-by: Paolo Valente <paolo.valente@linaro.org>
Signed-off-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20211125133645.27483-7-jack@suse.cz
Signed-off-by: Jens Axboe <axboe@kernel.dk>
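
The added logging is of the blktrace-message kind, roughly (a sketch;
bfq_log_bfqq() is bfq's existing per-queue trace helper):

/* Emit a trace line when a waker is detected for bfqq, so the
 * relationship shows up in blktrace output.
 */
bfq_log_bfqq(bfqd, bfqq, "waker detected: pid %d", bfqq->waker_bfqq->pid);
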