Re: new IRQ scalability changes in 2.3.48

From: Richard Henderson (rth@cygnus.com)
Date: Wed Mar 08 2000 - 20:55:10 EST


On Wed, Mar 08, 2000 at 06:02:26PM -0700, yodaiken@fsmlabs.com wrote:
> > if different IRQs are delivered to different CPUs, then there is no global
> > spinlock connection between them. Also see /proc/irq/*/smp_affinity. Eg.
>
> I've been thinking about this change and still don't see what good it
> does.
> On a UP box -- no change, except the code is more complex.
> On an SMP box -- a performance loss without using affinity:
> take two spinlocks instead of one, more cache bouncing, etc.

Actually, it does make sense on Alpha. We don't really take advantage
of it at the moment but --

We don't need any spinlocks at all.

Each cpu has its own interrupt mask register, so the actual interrupt
handler needn't worry about communicating with other processors.
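
Roughly the shape of what that buys us, as an untested sketch (the
cached per-cpu mask array and write_local_irq_mask() are made up for
illustration, not the actual alpha code):

#define NR_CPUS	32			/* assumption, just for the sketch */

/* per-cpu software copy of the interrupt mask register */
static unsigned long cached_irq_mask[NR_CPUS];

/* hypothetical: write the mask into the *local* cpu's mask register */
extern void write_local_irq_mask(unsigned long mask);

static void local_mask_irq(int cpu, unsigned int irq)
{
	/* Only this cpu ever writes its own entry, so plain stores
	   are race-free and the interrupt path takes no lock. */
	cached_irq_mask[cpu] |= 1UL << irq;
	write_local_irq_mask(cached_irq_mask[cpu]);
}

static void local_unmask_irq(int cpu, unsigned int irq)
{
	cached_irq_mask[cpu] &= ~(1UL << irq);
	write_local_irq_mask(cached_irq_mask[cpu]);
}

Since each cpu only ever touches its own slot and its own mask
register, the low-level handler path needs no lock at all.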

For device startup and shutdown when we call enable_irq and disable_irq
and suchlike, we can avoid races with remote interrupt masking by using
an interprocessor interrupt. This is more heavy-weight than a spinlock,
but it ought to be much less frequent.
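
Something along these lines, again only a sketch: it assumes the
kernel's existing smp_call_function() IPI primitive and the
local_mask_irq() helper from the sketch above; the real disable_irq()
in 2.3.48 doesn't look like this.

#include <linux/smp.h>

/* Runs on the remote cpus via IPI; masks the line locally. */
static void ipi_mask_irq(void *info)
{
	unsigned int irq = (unsigned int)(long)info;

	local_mask_irq(smp_processor_id(), irq);
}

void disable_irq(unsigned int irq)
{
	/* Mask on this cpu first ... */
	local_mask_irq(smp_processor_id(), irq);

	/* ... then make every other cpu do the same and wait for
	   them.  Heavier than a spinlock, but enable/disable is rare
	   compared to interrupt delivery, so it's a good trade. */
	smp_call_function(ipi_mask_irq, (void *)(long)irq, 1, 1);
}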

As for the affinity mask, the issue is that our core logics deliver an
interrupt to *all* cpus that have the interrupt unmasked. In contrast,
the x86 ioapic picks a single cpu to receive the interrupt. Thus on
alpha smp we are currently trying to handle the interrupt on all cpus
simultaneously, which is just stupid.

What we ought to do is initialize the affinity mask with some static
distribution, then let the sysadmin tweak it if they want to tune for
their actual interrupt load. It's way easier than trying to do our own
periodic load balancing, and better than Digital Unix, which just
delivers everything to cpu 0.
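
For concreteness, a sketch of the kind of static distribution I mean
(the NR_IRQS value and set_irq_affinity() are made up here; the real
knob is just the per-irq affinity word behind /proc/irq/*/smp_affinity):

#define NR_IRQS	64			/* assumption for the sketch */

extern int smp_num_cpus;		/* number of online cpus */

/* hypothetical setter for the per-irq cpu mask */
extern void set_irq_affinity(unsigned int irq, unsigned long cpu_mask);

static void init_irq_affinity(void)
{
	unsigned int irq;

	for (irq = 0; irq < NR_IRQS; irq++) {
		/* Round-robin one cpu per irq; assume cpus
		   0..smp_num_cpus-1 are the online ones. */
		set_irq_affinity(irq, 1UL << (irq % smp_num_cpus));
	}
}

After boot the sysadmin can then steer individual interrupts by
writing a different cpu mask into /proc/irq/*/smp_affinity, per the
quote above.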

r~
