[RFC PATCH] nvme: prevent hang on surprise removal of NVMe disk

From: Markus Blöchl
Date: Mon Feb 14 2022 - 05:14:29 EST


After the surprise removal of a mounted NVMe disk, the pciehp IRQ thread
reliably hangs forever with a trace similar to this one:

INFO: task irq/43-pciehp:64 blocked for more than 120 seconds.
Call Trace:
<TASK>
__bio_queue_enter
blk_mq_submit_bio
submit_bio_noacct
submit_bio_wait
blkdev_issue_flush
ext4_sync_fs
sync_filesystem
fsync_bdev
delete_partition
blk_drop_partitions
del_gendisk
nvme_ns_remove
nvme_remove_namespaces
nvme_remove
pci_device_remove
__device_release_driver
device_release_driver
pci_stop_bus_device
pci_stop_and_remove_bus_device
pciehp_unconfigure_device
pciehp_disable_slot
pciehp_handle_presence_or_link_change
pciehp_ist
</TASK>
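
If I read the blk-mq code correctly, the flush bio ends up sleeping in
__bio_queue_enter() on a queue which was frozen but never gets unfrozen
again. Simplified excerpt from block/blk-core.c as of 5.17-rc3 (the
REQ_NOWAIT branch and the memory barrier are omitted here, so take it
with a grain of salt):

static int __bio_queue_enter(struct request_queue *q, struct bio *bio)
{
	while (!blk_try_enter_queue(q, false)) {
		struct gendisk *disk = bio->bi_bdev->bd_disk;

		/* only woken for good once the freeze is lifted ... */
		wait_event(q->mq_freeze_wq,
			   (!q->mq_freeze_depth &&
			    blk_pm_resume_queue(false, q)) ||
			   test_bit(GD_DEAD, &disk->state));
		/* ... or the disk has been marked dead */
		if (test_bit(GD_DEAD, &disk->state))
			goto dead;
	}

	return 0;
dead:
	bio_io_error(bio);
	return -ENODEV;
}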

I observed this with 5.15.5 from Debian bullseye-backports and confirmed
it with 5.17.0-rc3, but earlier kernels may be affected as well.

As I read it, del_gendisk() only starts preventing new I/O after it has
flushed and dropped all partitions. But in the case of a surprise
removal, any new blocking I/O must be prevented before that flush,
because the device is already gone and the flush can never complete.
I assume that nvme_set_queue_dying() is supposed to do this.
Is there any other mechanism in place which should achieve it?
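
nvme_set_queue_dying() ends up in blk_set_queue_dying(), which as of
5.17-rc3 looks roughly like this (block/blk-core.c, original comments
trimmed):

void blk_set_queue_dying(struct request_queue *q)
{
	blk_queue_flag_set(QUEUE_FLAG_DYING, q);

	/* block new requests from entering the queue */
	blk_freeze_queue_start(q);

	if (queue_is_mq(q))
		blk_mq_wake_waiters(q);

	/* make blk_queue_enter() reexamine the DYING flag */
	wake_up_all(&q->mq_freeze_wq);
}

So it freezes the queue and wakes everyone sleeping on q->mq_freeze_wq,
but since GD_DEAD is never set, the wait condition above stays false and
the flush just goes back to sleep forever. Setting GD_DEAD before the
call, as in the patch below, should make the wakeup fail the bio with
-ENODEV instead. At least that is my understanding.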

Unfortunately I am not very familiar with the blk-mq infrastructure, so
any comments and suggestions are very welcome.

Best regards,

Markus


Signed-off-by: Markus Blöchl <markus.bloechl@xxxxxxxxxxxxx>
---
 drivers/nvme/host/core.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 961a5f8a44d2..0654cbe9b80e 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -4573,6 +4573,8 @@ static void nvme_set_queue_dying(struct nvme_ns *ns)
 	if (test_and_set_bit(NVME_NS_DEAD, &ns->flags))
 		return;
 
+	set_bit(GD_DEAD, &ns->disk->state);
+
 	blk_set_queue_dying(ns->queue);
 	nvme_start_ns_queue(ns);


base-commit: f1baf68e1383f6ed93eb9cff2866d46562607a43
--
2.35.1