Re: CONFIG_IRQBALANCE for AMD64?

From: Martin J. Bligh
Date: Fri May 28 2004 - 13:59:46 EST


>> Here's my start at a list ... I'm sure it's woefully incomplete.
>>
>> 1. Utilize all CPUs roughly evenly for IRQ processing load (anything that's
>> not measured by the scheduler at least, because it's unfair to other
>> processes).
>
> yep; irqbalance approximates irq processing load by irq count, which seems
> to be ok-ish so far.

Isn't that exactly what the in-kernel one does, though? Which is what people
were complaining about (wrt network backend (softirq?) processing). And some
interrupts are much heavier than others, surely?
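
Roughly, all an irq-count approach can see from userspace is the per-CPU
count columns of /proc/interrupts -- the sketch below is illustrative only
(not irqbalance's actual code), and note there's nothing in there about how
heavy each handler or its softirq follow-on work is:

/*
 * Sketch: approximate per-CPU interrupt "load" by summing the per-CPU
 * count columns of /proc/interrupts. Assumes the usual
 * "  IRQ:  count count ...  chip  name" layout.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_CPUS 64

int main(void)
{
	FILE *f = fopen("/proc/interrupts", "r");
	unsigned long long total[MAX_CPUS] = { 0 };
	char line[4096];
	int ncpus = 0, cpu;

	if (!f) {
		perror("/proc/interrupts");
		return 1;
	}

	/* Header line has one "CPUn" label per online CPU. */
	if (fgets(line, sizeof(line), f)) {
		char *p = line;
		while ((p = strstr(p, "CPU")) != NULL && ncpus < MAX_CPUS) {
			ncpus++;
			p += 3;
		}
	}

	/* Data lines: "  <irq>:  <CPU0 count> <CPU1 count> ... <name>" */
	while (fgets(line, sizeof(line), f)) {
		char *p = strchr(line, ':');

		if (!p)
			continue;
		p++;
		for (cpu = 0; cpu < ncpus; cpu++) {
			char *end;
			unsigned long long v = strtoull(p, &end, 10);

			if (end == p)
				break;	/* ran into the handler/name column */
			total[cpu] += v;
			p = end;
		}
	}
	fclose(f);

	for (cpu = 0; cpu < ncpus; cpu++)
		printf("CPU%d: %llu interrupts\n", cpu, total[cpu]);
	return 0;
}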

>> Also, we may well have more than 1 CPU's worth of traffic to
>> process in a large network server.
>
> One NIC? I've yet to see that ;)

No, multiple NICs. But if I shove 8 gigabit adaptors in a machine, we need
more than one CPU to process it ...

>> 2. Provide some sort of cache-affinity for network interrupt processing,
>> which also helps us not get into out-of-order packet situations.
>
> yep; irqbalance does that
>
>> 3. Utilize idle CPUs where possible to shoulder the load.
>
> this is in direct conflict with 2; esp since cpus are idle very short times
> all the time in busy scenarios (and non-busy scenarios are boring wrt irq
> load ;)

Mmmm. For benchmarking scenarios, maybe ... but I do believe that machines
aren't maxed out all the time. It's definitely a problem to determine how long
the idle interval is going to be, though. Past history might be a clue, but
... yes, not easy.
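
For what it's worth, "past history" is about all userspace can get at anyway
-- e.g. sampling the idle jiffies column of /proc/stat (sketch only, and it
says nothing about how long the CPU will *stay* idle):

/*
 * Sketch: estimate how idle CPU0 has been recently by sampling the
 * idle-jiffies field of its /proc/stat line twice. Purely backward
 * looking -- no prediction of the next idle interval.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static unsigned long long idle_jiffies(int cpu)
{
	FILE *f = fopen("/proc/stat", "r");
	char tag[16], line[512];
	unsigned long long user, nice, sys, idle = 0;

	if (!f)
		return 0;
	snprintf(tag, sizeof(tag), "cpu%d ", cpu);
	while (fgets(line, sizeof(line), f)) {
		if (strncmp(line, tag, strlen(tag)) == 0) {
			sscanf(line, "%*s %llu %llu %llu %llu",
			       &user, &nice, &sys, &idle);
			break;
		}
	}
	fclose(f);
	return idle;
}

int main(void)
{
	unsigned long long before = idle_jiffies(0);

	sleep(1);
	printf("CPU0 idle jiffies over the last second: %llu\n",
	       idle_jiffies(0) - before);
	return 0;
}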

>> 4. Provide such a solution for all architectures.
>
> irqbalanced is in principle arch independent, since the /proc interface is
> quite generic.

In principle either way *could* be arch-independent ... though the in-kernel
one certainly isn't right now.
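
For reference, the generic interface being referred to is (I assume)
/proc/irq/<N>/smp_affinity, which takes a hex bitmask of allowed CPUs;
pinning an IRQ from userspace needs nothing arch-specific (sketch only,
hypothetical IRQ number, needs root):

/*
 * Sketch: pin an interrupt to one CPU via /proc/irq/<N>/smp_affinity.
 * The IRQ number below is made up; the file wants a hex CPU bitmask.
 */
#include <stdio.h>

static int set_irq_affinity(int irq, unsigned long cpu_mask)
{
	char path[64];
	FILE *f;

	snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%lx\n", cpu_mask);
	return fclose(f);
}

int main(void)
{
	/* e.g. route IRQ 24 (hypothetical NIC) to CPU 2 only. */
	if (set_irq_affinity(24, 1UL << 2) != 0)
		perror("set_irq_affinity");
	return 0;
}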

Is there actually any algorithmic difference between the user-space and
in-kernel ones? Or is this just a philosophical debate about user vs. kernel
placement of code? ;-)

M.


