nvme/pci: No special case for queue busy on IO

author    Keith Busch <keith.busch@intel.com>
          Fri, 10 Feb 2017 23:15:52 +0000 (18:15 -0500)
committer Ashok Vairavan <ashok.vairavan@oracle.com>
          Wed, 19 Jul 2017 20:00:31 +0000 (13:00 -0700)
This driver previously required a special check for IO submitted to
nvme IO queues that are temporarily suspended. That is no longer
necessary since blk-mq provides a quiesce, so any IO that actually gets
submitted to such a queue must be ended, since the queue isn't going to
start back up.

This fixes a condition where we have fewer IO queues after a
controller reset. This may happen if the number of CPUs has changed,
or a controller firmware update has changed the queue count, for example.

While it may be possible to complete the IO on a different queue, the
block layer does not provide a way to resubmit a request on a different
hardware context once the request has entered the queue. We don't want
these requests to be stuck indefinitely either, so ending them in error
is our only option at the moment.

Signed-off-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Jens Axboe <axboe@fb.com>
(cherry picked from commit 9ef3932e250f8e2e11ffbc0c1f28b3ba5dc40cd6)

Orabug: 26486098

The UEK4 blk-mq module doesn't have the quiescing capability, so the
requests should fail if a namespace is dead.

Signed-off-by: Ashok Vairavan <ashok.vairavan@oracle.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 62ffd620828a4378ad5a4f558e381b6645dd2337..5edff62f980cc1a1a934e36877fc89729f52c009 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -657,10 +657,7 @@ static int nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
 
        spin_lock_irq(&nvmeq->q_lock);
        if (unlikely(nvmeq->cq_vector < 0)) {
-               if (ns && !test_bit(NVME_NS_DEAD, &ns->flags))
-                       ret = BLK_MQ_RQ_QUEUE_BUSY;
-               else
-                       ret = BLK_MQ_RQ_QUEUE_ERROR;
+               ret = BLK_MQ_RQ_QUEUE_ERROR;
                spin_unlock_irq(&nvmeq->q_lock);
                goto out;
        }