Re: [PATCH] nvme: fix reconnection fail due to reserved tag allocation

From: Sagi Grimberg
Date: Thu Mar 07 2024 - 05:35:22 EST

On 07/03/2024 12:32, 许春光 (Chunguang Xu) wrote:
Thanks for the review. It seems we should revert this patch,
ed01fee283a0; ed01fee283a0 looks like just a standalone
'optimization'. If there is no doubt, I will send another patch.

Not a revert, but a fix with a Fixes tag. Just use NVMF_ADMIN_RESERVED_TAGS and NVMF_IO_RESERVED_TAGS.
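
Roughly, such a fix could look like the sketch below. Only the two
macro names come from the suggestion above; where they are defined and
how reserved_tags is assigned in the tagset setup is assumed here:

/*
 * Sketch only: the admin tagset needs one reserved tag for the
 * connect command plus one for the keep-alive command; I/O tagsets
 * only need the connect tag.
 */
#define NVMF_ADMIN_RESERVED_TAGS	2
#define NVMF_IO_RESERVED_TAGS		1

/* admin/fabrics tagset setup (exact location assumed) */
set->reserved_tags = NVMF_ADMIN_RESERVED_TAGS;

/* I/O tagset setup (exact location assumed) */
set->reserved_tags = NVMF_IO_RESERVED_TAGS;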

Thanks

Sagi Grimberg <sagi@xxxxxxxxxxx> wrote on Thu, Mar 7, 2024 at 17:36:


On 28/02/2024 11:14, brookxu.cn wrote:
From: Chunguang Xu <chunguang.xu@xxxxxxxxxx>

We found an issue in a production environment while using
NVMe over RDMA: admin_q failed to reconnect forever even
though the remote target and the network were fine. After
digging into it, we found it may be caused by an ABBA
deadlock due to tag allocation. In my case the tag was held
by a keep-alive request waiting inside admin_q: as we
quiesce admin_q while resetting the controller, the request
is marked idle and will not be processed before the reset
succeeds. As fabric_q shares its tagset with admin_q, we
need a tag for the connect command when reconnecting to the
remote target, but the only reserved tag is held by the
keep-alive command waiting inside admin_q. As a result, we
fail to reconnect admin_q forever.
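
For reference, a rough sketch of the two reserved-tag users involved.
The block-layer flags, blk_mq_alloc_request() and the nvme_ctrl queue
fields are real kernel identifiers; the function itself is only an
illustration of the sequence, not the actual driver paths:

#include <linux/blk-mq.h>
#include "nvme.h"	/* driver-local header providing struct nvme_ctrl */

/* illustrative only, not a real driver function */
static void reserved_tag_contention_sketch(struct nvme_ctrl *ctrl)
{
	struct request *keep_alive_rq, *connect_rq;

	/*
	 * 1. Keep-alive takes the only reserved tag of the shared
	 *    admin/fabrics tagset, then sits on admin_q, which is
	 *    quiesced during reset, so the tag is never freed.
	 */
	keep_alive_rq = blk_mq_alloc_request(ctrl->admin_q, REQ_OP_DRV_IN,
					     BLK_MQ_REQ_RESERVED |
					     BLK_MQ_REQ_NOWAIT);

	/*
	 * 2. Reconnect needs a reserved tag for the connect command on
	 *    fabrics_q, which shares the tagset, but the only reserved
	 *    tag is still held above, so the allocation waits forever.
	 */
	connect_rq = blk_mq_alloc_request(ctrl->fabrics_q, REQ_OP_DRV_OUT,
					  BLK_MQ_REQ_RESERVED);
}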

In order to work around this issue, I think we should not
retry the keep-alive request while the controller is
reconnecting: we have already stopped keep-alive while
resetting the controller and will start it again once
initialization finishes, so it should be OK to drop it.

This is the wrong fix.
First, we should note that this is a regression caused by:
ed01fee283a0 ("nvme-fabrics: only reserve a single tag")

Then, you need to restore reserving two tags for the admin
tagset.