Re: [patch] revert: [NET]: Fix races in net_rx_action vs netpoll

From: Kok, Auke
Date: Thu Jul 19 2007 - 11:43:17 EST


Olaf Kirch wrote:
> On Thursday 19 July 2007 12:58, Ingo Molnar wrote:
> > i.e. it's the classic 'eth0 got stuck somehow' tx/rx state machine
> > hiccup symptoms, with no other bad symptoms such as lockups or crashes.

> Duh, I found it.
>
> The e1000 poll routine does this to leave polling mode:
>
>	/* take the device off the poll list, then unmask the irq */
>	netif_rx_complete(poll_dev);
>	e1000_irq_enable(adapter);
>	return 0;
>
> Which looks innocent enough, except that e1000_irq_enable has
> this little irq_sem counter:
>
>	/* unmask the hardware only when the count drops to exactly zero */
>	if (likely(atomic_dec_and_test(&adapter->irq_sem))) {
>		E1000_WRITE_REG(&adapter->hw, IMS, IMS_ENABLE_MASK);
>		E1000_WRITE_FLUSH(&adapter->hw);
>	}
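
For reference, the disable side of this accounting - paraphrased here
from memory of the e1000 driver of that era, so details may differ -
bumps the same counter before masking the hardware, which is why
disables and enables are meant to pair up one-to-one:

	static void e1000_irq_disable(struct e1000_adapter *adapter)
	{
		atomic_inc(&adapter->irq_sem);          /* pairs with dec_and_test */
		E1000_WRITE_REG(&adapter->hw, IMC, ~0); /* mask all irq causes     */
		E1000_WRITE_FLUSH(&adapter->hw);
		synchronize_irq(adapter->pdev->irq);
	}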

> So as poll_napi calls the poll() routine repeatedly, the irq_sem
> counter is decremented by one each time. During the first call,
> it re-enables the interrupt. During the next calls, irq_sem goes
> negative.
>
> Then an interrupt comes in, e1000_intr disables the interrupt,
> increments irq_sem by one, and schedules the device for rx_action.
> rx_action calls dev->poll(), which finishes cleaning the tx/rx rings,
> and when it finds there's no more work, it calls rx_complete and
> irq_enable. Except irq_enable doesn't enable anything now, since
> irq_sem is <= 0, and dec_and_test returns false.
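
To make the arithmetic concrete, here is a minimal userspace model of
that sequence - plain ints standing in for the atomic ops and register
writes, and the function names only mirror the driver's:

	#include <stdio.h>

	static int irq_sem = 1;            /* ISR just ran: irqs masked      */

	static int irq_enable(void)        /* models e1000_irq_enable()      */
	{
		return --irq_sem == 0;     /* IMS written only on exact zero */
	}

	static void irq_disable(void)      /* models e1000_intr()'s disable  */
	{
		irq_sem++;
	}

	int main(void)
	{
		int en;

		en = irq_enable();         /* poll_napi, 1st poll: 1 -> 0    */
		printf("poll #1: enabled=%d sem=%d\n", en, irq_sem);
		en = irq_enable();         /* poll_napi, 2nd poll: 0 -> -1   */
		printf("poll #2: enabled=%d sem=%d\n", en, irq_sem);
		irq_disable();             /* interrupt fires: -1 -> 0       */
		en = irq_enable();         /* net_rx_action's poll: 0 -> -1  */
		printf("poll #3: enabled=%d sem=%d\n", en, irq_sem);
		/* dec_and_test never sees zero again: irqs stay masked */
		return 0;
	}

Running this prints enabled=1 only for the first call; after that the
counter never returns to zero, which is exactly the 'eth0 got stuck'
state described above.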

> The whole irq_sem accounting in the e1000 does not rhyme well with
> netpoll's way of exercising dev->poll().

it's been accused of worse things ;)

> The reason my patch triggers the problem reliably for you is that
> now, we always get at least two invocations of dev->poll: once from
> poll_napi - where we do not remove the device from the poll list any
> longer - and another one from net_rx_action.
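
Again as a model only (a hypothetical helper, not netpoll code): the
difference the patch makes is just the number of times dev->poll() -
and hence e1000_irq_enable() - runs per interrupt cycle:

	#include <stdio.h>

	/* Run `npolls` poll() invocations against a freshly masked irq
	 * (irq_sem == 1) and show where the counter ends up. */
	static void scenario(const char *name, int npolls)
	{
		int sem = 1;

		while (npolls--)
			--sem;             /* each poll ends in irq_enable() */
		printf("%-45s irq_sem=%d%s\n", name, sem,
		       sem < 0 ? "  <- corrupted" : "");
	}

	int main(void)
	{
		scenario("before the patch (one dev->poll invocation):", 1);
		scenario("after the patch (poll_napi + net_rx_action):", 2);
		return 0;
	}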

> I don't have a fix ready yet - I hope I'll have something later
> this afternoon.

Interesting - you seem to have found the cause all right. I can't confirm the problem myself, but I know that netpoll and NAPI have historically been an issue. I look forward to your suggestions...

Cheers,

Auke