Re: [RFC] genirq: Flush the irq thread on synchronization

From: Thomas Gleixner
Date: Wed Feb 15 2012 - 09:34:33 EST


On Mon, 13 Feb 2012, Ido Yariv wrote:

> Hi Thomas,
>
> On Sun, Dec 04, 2011 at 09:09:32PM +0200, Ido Yariv wrote:
> > Hi Thomas,
> >
> > On Sat, Dec 03, 2011 at 12:21:46AM +0100, Thomas Gleixner wrote:
> > > I can see your problem, but this might lead to threads_active leaks
> > > under certain conditions. desc->threads_active was only meant to deal
> > > with shared interrupts.
> > >
> > > We explicitly allow a design where the primary handler can leave the
> > > device interrupt enabled and allow further interrupts to occur while
> > > the handler is running. We only have a single bit to note that the
> > > thread should run, but your wakeup would bump the threads_active count
> > > several times in that scenario without a counterpart which decrements it.
> > >
> > > The solution for this is to keep the current threads_active semantics
> > > and change the wait function. Instead of waiting only for
> > > threads_active to become 0, it should wait for threads_active == 0 and
> > > for IRQTF_RUNTHREAD to be cleared for all actions. To avoid looping
> > > over the actions, we can take a similar approach as we do with the
> > > desc->threads_oneshot bitfield.
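
A minimal sketch of the wait condition described in the paragraph above;
the threads_pending bitfield and the helper names are hypothetical
stand-ins for a threads_oneshot-style field, not actual kernel code:

/*
 * Wait until no threaded handler is running (threads_active == 0) and
 * no thread still has an unconsumed IRQTF_RUNTHREAD, tracked here in a
 * hypothetical per-descriptor threads_pending bitfield so the action
 * list does not have to be walked.
 */
static bool irq_thread_work_done(struct irq_desc *desc)
{
        unsigned long flags;
        bool done;

        raw_spin_lock_irqsave(&desc->lock, flags);
        done = !atomic_read(&desc->threads_active) && !desc->threads_pending;
        raw_spin_unlock_irqrestore(&desc->lock, flags);

        return done;
}

static void irq_wait_for_threads(struct irq_desc *desc)
{
        wait_event(desc->wait_for_threads, irq_thread_work_done(desc));
}
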
> >
> > Thanks for reviewing this.
> >
> > I might be missing something, but I don't see any potential
> > threads_active leaks in this approach. We won't increase threads_active
> > if IRQTF_RUNTHREAD was already set beforehand (as test_and_set_bit()
> > will return 1).
> >
> > If irq_wake_thread is called multiple times before irq_thread has had a
> > chance to run, threads_active will only be increased once and decreased
> > back when IRQTF_RUNTHREAD is cleared.
> >
> > Am I missing something? If not, do you see any other issues with this
> > implementation?
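
A minimal sketch of the accounting described above, using the usual
kernel/irq field names (desc->threads_active, desc->wait_for_threads,
action->thread_flags); the helper irq_thread_done() is a hypothetical
name and the hunks are illustrative rather than the actual patch:

/* Hard-irq path: wake the handler thread for this action. */
static void irq_wake_thread(struct irq_desc *desc, struct irqaction *action)
{
        /*
         * Only the caller that actually sets IRQTF_RUNTHREAD accounts
         * the thread. Repeated wake-ups before the thread has run are
         * no-ops, so threads_active is incremented at most once per run.
         */
        if (test_and_set_bit(IRQTF_RUNTHREAD, &action->thread_flags))
                return;

        atomic_inc(&desc->threads_active);
        wake_up_process(action->thread);
}

/* Thread context: called once the thread has consumed IRQTF_RUNTHREAD. */
static void irq_thread_done(struct irq_desc *desc)
{
        if (atomic_dec_and_test(&desc->threads_active))
                wake_up(&desc->wait_for_threads);
}

With that pairing, synchronize_irq() can keep waiting for
!atomic_read(&desc->threads_active) and will not return while a woken
thread has yet to run.
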
>
> Any thoughts on this?

Sorry, went off my radar. Will have a look later today.

Thanks,

tglx