Re: [PATCH] IB/hfi1: Fix potential deadlock on &sde->flushlist_lock

From: Leon Romanovsky
Date: Tue Jul 04 2023 - 07:49:02 EST


On Wed, Jun 28, 2023 at 04:59:25AM +0000, Chengfeng Ye wrote:
> As &sde->flushlist_lock is acquired by the timer sdma_err_progress_check()
> through a chain of calls in softirq context, other process-context
> code acquiring the lock should disable irqs.
>
> Possible deadlock scenario
> sdma_send_txreq()
> -> spin_lock(&sde->flushlist_lock)
> <timer interrupt>
> -> sdma_err_progress_check()
> -> __sdma_process_event()
> -> sdma_set_state()
> -> sdma_flush()
> -> spin_lock_irqsave(&sde->flushlist_lock, flags) (deadlock here)
>
> This flaw was found using an experimental static analysis tool we are
> developing for irq-related deadlocks.
>
> This tentative patch fixes the potential deadlock by using spin_lock_irqsave().
>
> Signed-off-by: Chengfeng Ye <dg573847474@xxxxxxxxx>
> ---
> drivers/infiniband/hw/hfi1/sdma.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/infiniband/hw/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
> index bb2552dd29c1..0431f575c861 100644
> --- a/drivers/infiniband/hw/hfi1/sdma.c
> +++ b/drivers/infiniband/hw/hfi1/sdma.c
> @@ -2371,9 +2371,9 @@ int sdma_send_txreq(struct sdma_engine *sde,
> tx->sn = sde->tail_sn++;
> trace_hfi1_sdma_in_sn(sde, tx->sn);
> #endif
> - spin_lock(&sde->flushlist_lock);
> + spin_lock_irqsave(&sde->flushlist_lock, flags);
> list_add_tail(&tx->list, &sde->flushlist);
> - spin_unlock(&sde->flushlist_lock);
> + spin_unlock_irqrestore(&sde->flushlist_lock, flags);
> iowait_inc_wait_count(wait, tx->num_desc);
> queue_work_on(sde->cpu, system_highpri_wq, &sde->flush_worker);
> ret = -ECOMM;

It can't work: immediately after the "ret = -ECOMM;" line there is a
"goto unlock", where hfi1 calls spin_unlock_irqrestore(..) with the same
"flags".

Plus, we are already in a context where interrupts are disabled.

Thanks

> @@ -2459,7 +2459,7 @@ int sdma_send_txlist(struct sdma_engine *sde, struct iowait_work *wait,
> *count_out = total_count;
> return ret;
> unlock_noconn:
> - spin_lock(&sde->flushlist_lock);
> + spin_lock_irqsave(&sde->flushlist_lock, flags);
> list_for_each_entry_safe(tx, tx_next, tx_list, list) {
> tx->wait = iowait_ioww_to_iow(wait);
> list_del_init(&tx->list);
> @@ -2472,7 +2472,7 @@ int sdma_send_txlist(struct sdma_engine *sde, struct iowait_work *wait,
> flush_count++;
> iowait_inc_wait_count(wait, tx->num_desc);
> }
> - spin_unlock(&sde->flushlist_lock);
> + spin_unlock_irqrestore(&sde->flushlist_lock, flags);
> queue_work_on(sde->cpu, system_highpri_wq, &sde->flush_worker);
> ret = -ECOMM;
> goto update_tail;
> --
> 2.17.1
>