On Mon, 24 Apr 2000, Andrew Morton wrote:
> I have done some measurements on this. The worst case interrupt
> blockage within the IDE code when using UDMA with DMA is 90 microseconds
> (500MHz CPU). When interrupts are unmasked this comes down to 32 usecs
> worst case. Compare this with the console code's 3,000 microseconds.
> More details at http://www.uow.edu.au/~andrewm/ide-intr.txt
> If you're concerned about latency you must knock off the peaks. With
> DMA, IDE is down in the noise floor. And that unmasking code is pretty
> scary stuff....
Interesting results indeed ..
Notice that I've been able to run the RTC at 2048Hz (roughly 500usec per
period), using a SCHED_FIFO process on a kernel + lowlatency patches.
The maximum drift was about 500usec.
That means if you can save 60usec on each run, you gain about
12% in terms of CPU performance.
My example was a program doing heavy computation (using up to 80% of the
CPU) between each 500usec run.
That means without unmasking you may lose 10-15% in terms of CPU.
Notice that this application is not a contrived example.
Take for example a MIDI application, which wants to receive/send events with a
very high degree of precision. You need timer events at 1-2kHz frequency in
order to keep the MIDI jitter to a minimum.
Run your preferred HD recorder in the background (ardour doing 48-channel IO :-) ),
and you are in the above case.
Not taking advantage of the IRQ-unmasking feature is like preferring
one big spinlock to multiple fine-grained ones.
In both cases performance will be fine as long as there are no collisions,
but under demanding applications the first approach loses.