>> > That's broken, but for completely different reasons. It's
>> > disabling interrupts when the skb_* stuff is now spinlocked.
>>
>> Alan, look again. Only the non __ versions of skb_* call the
>> spinlocks.
Alan> Yes yes I know that - what I mean is the
Alan> cli() __skb_blah sti()
Alan> stuff is _extremely_ slow in 2.1 in comparison to the non __
Alan> version
I think I fixed this a long time ago in my tree ... I was just not
sure it was 100% correct and forgot about it.
However, several hundred GB of data transfers later I still haven't
seen it cause any problems, so try this patch (or tell me why it's
wrong ;-):
Jes
--- vger/net/core/dev.c Thu Jul 9 12:05:06 1998
+++ linux/net/core/dev.c Wed Jul 15 11:38:47 1998
@@ -809,18 +809,13 @@
 	while (!skb_queue_empty(&backlog))
 	{
-		struct sk_buff * skb = backlog.next;
+		struct sk_buff * skb;
 		/* Give chance to other bottom halves to run */
 		if (jiffies - start_time > 1)
 			goto net_bh_break;
-		/*
-		 * We have a packet. Therefore the queue has shrunk
-		 */
-		cli();
-		__skb_unlink(skb, &backlog);
-		sti();
+		skb = skb_dequeue(&backlog);
 #ifdef CONFIG_CPU_IS_SLOW
 		if (ave_busy > 128*16) {
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.altern.org/andrebalsa/doc/lkml-faq.html