Re: [PATCH] virtio_blk: fix race between start and stop queue

From: Jens Axboe
Date: Fri May 16 2014 - 10:57:45 EST


On 2014-05-16 08:53, Jens Axboe wrote:
> On 2014-05-15 06:33, Ming Lei wrote:
>> When there aren't enough vring descriptors to add a request to
>> the vq, blk-mq puts the queue into the stopped state until some
>> of the pending descriptors are completed and freed.
>>
>> Unfortunately, the vq's interrupt may arrive just before blk-mq's
>> BLK_MQ_S_STOPPED flag is set, so the queue is left stopped even
>> though plenty of descriptors have been completed and freed in the
>> interrupt handler. The worst case is that all pending descriptors
>> are freed in the interrupt handler and the queue stays stopped
>> forever.
>>
>> This patch fixes the problem by starting/stopping blk-mq while
>> holding vq_lock.
>
> Why not just use blk_mq_start_hw_queues()?
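
To make the quoted race concrete, here is the interleaving as I read
the description above (the CPU placement is illustrative, not taken
from a trace):

/*
 *  CPU0: virtio_queue_rq()            CPU1: vq interrupt, virtblk_done()
 *  -----------------------            ----------------------------------
 *  __virtblk_add_req() -> -ENOSPC
 *  spin_unlock_irqrestore(&vq_lock)
 *                                     spin_lock_irqsave(&vq_lock)
 *                                     completes and frees ALL pending
 *                                       descriptors
 *                                     blk_mq_start_stopped_hw_queues()
 *                                       -> no-op, BLK_MQ_S_STOPPED is
 *                                          not set yet
 *                                     spin_unlock_irqrestore(&vq_lock)
 *  blk_mq_stop_hw_queue(hctx)
 *    -> queue is now stopped, with no
 *       completion left to restart it
 */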

Or, if you want to maintain the current heuristics, just move the start
and stop under the vq_lock; see the diff below. That should prevent the
race, as far as I can tell. I'm not sure what that extra queue_stopped
state would buy you; it seems a lot cleaner to maintain this state
exclusively in the queue.
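
For the first option, an untested sketch of what the completion path
might look like; it simply restarts all hw queues after reaping
completions, rather than only the ones already marked BLK_MQ_S_STOPPED:

static void virtblk_done(struct virtqueue *vq)
{
	struct virtio_blk *vblk = vq->vdev->priv;
	bool req_done = false;
	struct virtblk_req *vbr;
	unsigned long flags;
	unsigned int len;

	spin_lock_irqsave(&vblk->vq_lock, flags);
	do {
		virtqueue_disable_cb(vq);
		while ((vbr = virtqueue_get_buf(vblk->vq, &len)) != NULL) {
			blk_mq_complete_request(vbr->req);
			req_done = true;
		}
		if (unlikely(virtqueue_is_broken(vq)))
			break;
	} while (!virtqueue_enable_cb(vq));
	spin_unlock_irqrestore(&vblk->vq_lock, flags);

	/* Restart unconditionally: blk_mq_start_hw_queues() clears
	 * BLK_MQ_S_STOPPED on every hctx and reruns it, whether or not
	 * the flag was set when this interrupt fired. */
	if (req_done)
		blk_mq_start_hw_queues(vblk->disk->queue);
}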

--
Jens Axboe

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 7a51f065edcd..2e328231a795 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -147,11 +147,12 @@ static void virtblk_done(struct virtqueue *vq)
 		if (unlikely(virtqueue_is_broken(vq)))
 			break;
 	} while (!virtqueue_enable_cb(vq));
-	spin_unlock_irqrestore(&vblk->vq_lock, flags);
 
 	/* In case queue is stopped waiting for more buffers. */
 	if (req_done)
 		blk_mq_start_stopped_hw_queues(vblk->disk->queue, true);
+
+	spin_unlock_irqrestore(&vblk->vq_lock, flags);
 }
 
 static int virtio_queue_rq(struct blk_mq_hw_ctx *hctx, struct request *req)
@@ -205,8 +206,8 @@ static int virtio_queue_rq(struct blk_mq_hw_ctx *hctx, struct request *req)
 	err = __virtblk_add_req(vblk->vq, vbr, vbr->sg, num);
 	if (err) {
 		virtqueue_kick(vblk->vq);
-		spin_unlock_irqrestore(&vblk->vq_lock, flags);
 		blk_mq_stop_hw_queue(hctx);
+		spin_unlock_irqrestore(&vblk->vq_lock, flags);
 		/* Out of mem doesn't actually happen, since we fall back
 		 * to direct descriptors */
 		if (err == -ENOMEM || err == -ENOSPC)
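
With both calls issued under vq_lock, the interrupt handler can no
longer slip in between the failed __virtblk_add_req() and
blk_mq_stop_hw_queue(), which is the lost-restart window described
above.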