Re: Staging: unisys/verisonic: Correct double unlock

From: Neil Horman
Date: Mon Apr 04 2016 - 10:35:30 EST


On Sat, Apr 02, 2016 at 11:20:14PM +0000, Sell, Timothy C wrote:
> > -----Original Message-----
> > From: Iban Rodriguez [mailto:iban.rodriguez@xxxxxxx]
> > Sent: Saturday, April 02, 2016 1:47 PM
> > To: Kershner, David A; Greg Kroah-Hartman; Benjamin Romer; Sell, Timothy
> > C; Neil Horman
> > Cc: *S-Par-Maintainer; devel@xxxxxxxxxxxxxxxxxxxx; linux-
> > kernel@xxxxxxxxxxxxxxx; Iban Rodriguez
> > Subject: Staging: unisys/verisonic: Correct double unlock
> >
> > 'priv_lock' is unlocked twice. The first one is removed and
> > the function 'visornic_serverdown_complete' is now called with
> > 'priv_lock' locked because 'devdata' is modified inside.
> >
> > Signed-off-by: Iban Rodriguez <iban.rodriguez@xxxxxxx>
> > ---
> > drivers/staging/unisys/visornic/visornic_main.c | 1 -
> > 1 file changed, 1 deletion(-)
> >
> > diff --git a/drivers/staging/unisys/visornic/visornic_main.c
> > b/drivers/staging/unisys/visornic/visornic_main.c
> > index be0d057346c3..af03f2938fe9 100644
> > --- a/drivers/staging/unisys/visornic/visornic_main.c
> > +++ b/drivers/staging/unisys/visornic/visornic_main.c
> > @@ -368,7 +368,6 @@ visornic_serverdown(struct visornic_devdata
> > *devdata,
> > }
> > devdata->server_change_state = true;
> > devdata->server_down_complete_func = complete_func;
> > - spin_unlock_irqrestore(&devdata->priv_lock, flags);
> > visornic_serverdown_complete(devdata);
> > } else if (devdata->server_change_state) {
> > dev_dbg(&devdata->dev->device, "%s changing state\n",
>
> I agree there is a bug here involving priv_lock being unlocked
> twice, but this patch isn't the appropriate fix. Reason is, we can NOT
> call visornic_serverdown_complete() while holding a spinlock
> (which is what this patch would cause to occur) because
> visornic_serverdown_complete() might block when it calls
> rtnl_lock() in this code sequence (rtnl_lock() grabs a mutex):
>
> rtnl_lock();
> dev_close(netdev);
> rtnl_unlock();
>
> Blocking with a spinlock held is always a bad idea. :-(
>

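To make the hazard Tim describes concrete: with the patch applied, the call sequence would look roughly like this (a sketch of the control flow, not the actual driver source):

```
visornic_serverdown(...)
    spin_lock_irqsave(&devdata->priv_lock, flags);
    ...
    visornic_serverdown_complete(devdata);   /* priv_lock still held */
        ...
        rtnl_lock();      /* mutex_lock() under a spinlock -- may sleep */
        dev_close(netdev);
        rtnl_unlock();
```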
You should just get rid of the priv_lock entirely; it's not needed.

priv_lock is used in the following functions:

visornic_serverdown - only called at the end of a tx_timeout reset operation, so
you are sure that the rx and tx paths are quiesced (i.e. no data access
happening)

visornic_disable_with_timeout - move the netif_stop_queue operation to the top
of this function and you will be guaranteed no concurrent access in the tx path
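Roughly like this (a sketch, untested; the exact surrounding code in visornic_main.c will differ):

```
static int visornic_disable_with_timeout(struct net_device *netdev,
					 const int timeout)
{
	struct visornic_devdata *devdata = netdev_priv(netdev);

	/* quiesce the tx path up front; once the queue is stopped,
	 * no transmit can race with us, so devdata can be modified
	 * below without taking priv_lock */
	netif_stop_queue(netdev);
	...
}
```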

visornic_enable_with_timeout - same as above, make sure that netif_start_queue
and napi_enable are at the end of the function and you are guaranteed no
concurrent access.
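I.e. the tail of the function would end up shaped roughly like this (sketch only, not the actual driver source):

```
	/* do all the devdata setup first, with no lock needed ... */
	...
	/* only then let traffic flow; nothing can have raced with the
	 * setup above because the queue was not yet running */
	napi_enable(&devdata->napi);
	netif_start_queue(netdev);
	return 0;
}
```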

visornic_xmit - The queue lock in the netdev start_xmit path guarantees
single access here, even with multiple concurrent transmits.

visornic_xmit_timeout - only called on a tx timeout, when you are guaranteed not
to have concurrent transmits occurring, by definition.

visornic_rx - the only tests made here are of devdata members that are altered
in service_resp_queue, and visornic_rx is only called from
service_resp_queue, so you are guaranteed a stable data structure, as there is
only ever one context in service_resp_queue since it's called from the napi poll
routine.

service_resp_queue - Same as above; for any given queue, service_resp_queue only
has one context executing at once.

host_side_disappeared - only called from visornic_remove, which implies that all
associated devices are closed already, guaranteeing single access.

visornic_remove
visornic_resume - Both of these functions only get called when all network
interfaces are quiesced.

Just remove the lock and make the minor changes needed to guarantee isolated
access. It makes the code cleaner and faster.

Neil


> > --
> > 1.9.1
>