Re: 2.6.23 WARNING: at kernel/softirq.c:139 local_bh_enable()

From: Matt Mackall
Date: Fri Nov 23 2007 - 14:46:42 EST


On Fri, Nov 23, 2007 at 10:32:22PM +0300, Evgeniy Polyakov wrote:
> On Fri, Nov 23, 2007 at 01:11:20PM -0600, Matt Mackall (mpm@xxxxxxxxxxx) wrote:
> > On Fri, Nov 23, 2007 at 09:59:06PM +0300, Evgeniy Polyakov wrote:
> > > On Fri, Nov 23, 2007 at 09:51:01PM +0300, Evgeniy Polyakov (johnpol@xxxxxxxxxxx) wrote:
> > > > On Fri, Nov 23, 2007 at 09:48:51PM +0300, Evgeniy Polyakov (johnpol@xxxxxxxxxxx) wrote:
> > > > > Wait, we are trying to free skbs that have no destructor and we still
> > > > > hit connection tracking, so that is not a solution. To fix the problem
> > > > > we need to check that the skb is not netfilter related, something like
> > > > > this (not tested); Simon, please give it a try:
> > > >
> > > > And to be really thorough we need to bypass skbs with xfrm attached,
> > > > since freeing those also assumes BH context.
> > >
> > > What about compile options?
> >
> > What about my original suggestion that we mark skbs owned by netpoll
> > and free only those? Much safer, no? Untested:
>
> This should work when there are netpoll skbs in the queue, but under
> memory pressure we want to free at least one skb, not only netpoll
> ones; and what if there are no netpoll skbs in the queue at all?

Yeah, that's a concern (though note that we do have a private reserve,
and we only really need the zap when that reserve is depleted). But I
worry that the approach is too fragile: if we add a new unsafe case, it
won't be noticed for a long time. This is the first report I've seen of
this particular problem, so it has been a latent bug for three or four
years now.
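
For concreteness, here is an untested sketch of roughly what such a
check amounts to. skb_irq_freeable() and netpoll_free_safe_skbs() are
invented names, not the patch posted in this thread, and the test could
just as well be an explicit "allocated by netpoll" mark as suggested
above:

#include <linux/skbuff.h>

/* An skb is only safe to free from hard IRQ context if nothing that
 * wants BH context hangs off it: no destructor, no conntrack
 * reference, no xfrm security path. */
static int skb_irq_freeable(const struct sk_buff *skb)
{
	return !skb->destructor &&
#if defined(CONFIG_NF_CONNTRACK) || defined(CONFIG_NF_CONNTRACK_MODULE)
	       !skb->nfct &&
#endif
#ifdef CONFIG_XFRM
	       !skb->sp &&
#endif
	       1;
}

/* Walk an already-detached completion list, free what is provably
 * safe here, and return the rest so the caller can hand it back to
 * the NET_TX softirq. */
static struct sk_buff *netpoll_free_safe_skbs(struct sk_buff *clist)
{
	struct sk_buff *defer = NULL;

	while (clist) {
		struct sk_buff *skb = clist;

		clist = clist->next;
		if (skb_irq_freeable(skb)) {
			__kfree_skb(skb);
		} else {
			skb->next = defer;
			defer = skb;
		}
	}
	return defer;
}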

Here's another thought: move all this logic into the networking core,
unify it with the current softirq zapper, and then allow it to be
called from various other places (like atomic allocators). Then it
will all be in one centrally maintained place with more users.
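
Roughly, and again with invented names (net_reclaim_completion_queue()
and its from_hardirq flag), an untested sketch of that shape: one
shared helper in net/core that the NET_TX softirq, netpoll, and atomic
allocators under memory pressure could all call:

#include <linux/skbuff.h>
#include <linux/netdevice.h>
#include <linux/interrupt.h>

void net_reclaim_completion_queue(int from_hardirq)
{
	struct softnet_data *sd = &get_cpu_var(softnet_data);
	struct sk_buff *clist;
	unsigned long flags;

	/* detach the per-CPU completion queue atomically */
	local_irq_save(flags);
	clist = sd->completion_queue;
	sd->completion_queue = NULL;
	local_irq_restore(flags);

	while (clist) {
		struct sk_buff *skb = clist;

		clist = clist->next;
		if (from_hardirq && skb->destructor) {
			/* not safe here (the full test would look like
			 * skb_irq_freeable() above); put it back for the
			 * softirq to free later */
			local_irq_save(flags);
			skb->next = sd->completion_queue;
			sd->completion_queue = skb;
			local_irq_restore(flags);
		} else {
			__kfree_skb(skb);
		}
	}

	put_cpu_var(softnet_data);
}

The softirq path would call it with from_hardirq == 0 and free
everything; netpoll or an atomic allocator under pressure would pass 1
and reclaim only what is provably safe, leaving the rest queued.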

--
Mathematics is the supreme nostalgia of our time.