Re: context switch vs. signal delivery [was: Re: Accelerating user mode linux]

From: Ingo Molnar (mingo@elte.hu)
Date: Mon Aug 05 2002 - 05:40:54 EST


On 4 Aug 2002, Andi Kleen wrote:

> > actually the opposite is true, on a 2.2 GHz P4:
> >
> > $ ./lat_sig catch
> > Signal handler overhead: 3.091 microseconds
> >
> > $ ./lat_ctx -s 0 2
> > 2 0.90
> >
> > i.e. *process to process* context switches are 3.4 times faster than signal
> > delivery. In other words, we can switch to a helper thread and back, and
> > still be faster than a *single* signal.
>
> This is because the signal save/restore does a lot of unnecessary work.
> One optimization I implemented at one time was adding an SA_NOFP signal
> bit that told the kernel that the signal handler did not intend to
> modify floating point state (few signal handlers need FP). The kernel
> would then skip saving the FPU state, which gave quite a noticeable
> speedup in signal latency.
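
A minimal sketch of how such an SA_NOFP flag might look from userspace
(the flag and its value are hypothetical - it never went into mainline):

    #include <signal.h>
    #include <string.h>

    #ifndef SA_NOFP
    #define SA_NOFP 0x02000000   /* hypothetical "handler touches no FP" bit */
    #endif

    static void on_sigio(int sig)
    {
            /* integer-only work: no FPU/SSE registers touched, so the
             * kernel could skip the FP save/restore for this delivery */
    }

    static int install_handler(void)
    {
            struct sigaction sa;

            memset(&sa, 0, sizeof(sa));
            sa.sa_handler = on_sigio;
            sa.sa_flags = SA_NOFP;   /* promise: handler uses no FP state */
            return sigaction(SIGIO, &sa, NULL);
    }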

Well, we already have an optimization in this area: if the thread
receiving the signal has not used the FPU during its current timeslice
(scheduling atom), we do not save the FPU state into the signal frame.
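
In rough pseudocode (hypothetical helper names, not the actual kernel
identifiers), the signal-frame setup amounts to:

    /* building the signal frame, simplified sketch */
    if (fpu_used_since_last_schedule(current)) {
            save_fpu_state(&frame->fpstate);   /* the ~1 usec that lat_sig pays */
            sc->fpstate = &frame->fpstate;
    } else {
            sc->fpstate = NULL;                /* nothing for sigreturn to restore */
    }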

lat_sig uses the FPU, so its numbers include this cost. With the FPU
saving cost removed, signal delivery latency is still 2.0 usecs - slightly
more than twice the cost of a context switch - so it's not a win. Also,
helper threads can batch queued events, which amortizes the context switch
overhead, while queued signals trigger a separate delivery per event, so
signal delivery costs are never amortized.
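
To make the amortization concrete, here is a minimal user-space sketch
(hypothetical event type, processing elided, error handling omitted): the
helper thread pays one sleep/wakeup switch per *batch* of events, while a
signal-per-event design pays the full delivery path every time.

    #include <pthread.h>
    #include <stddef.h>

    struct event { struct event *next; /* payload */ };

    static struct event *queue;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

    static void *helper(void *arg)
    {
            for (;;) {
                    struct event *batch, *e;

                    pthread_mutex_lock(&lock);
                    while (!queue)
                            pthread_cond_wait(&cond, &lock);  /* one switch ... */
                    batch = queue;                            /* ... drains everything */
                    queue = NULL;
                    pthread_mutex_unlock(&lock);

                    for (e = batch; e; e = e->next)
                            ;  /* process_event(e): cost amortized over the batch */
            }
            return NULL;
    }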

(Not that I advocate SIGIO or helper threads for high-performance IO -
Ben's aio interface is the fastest and most correct approach.)
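
For completeness, a minimal sketch of the batched aio model (hedged: this
follows the libaio wrapper as it later stabilized - io_setup/io_submit/
io_getevents - not necessarily the exact interface available at the time
of this mail; error checking omitted):

    #include <libaio.h>
    #include <stddef.h>

    static int read_one_block(int fd, char *buf, size_t len)
    {
            io_context_t ctx = 0;
            struct iocb cb, *cbs[1] = { &cb };
            struct io_event ev;

            io_setup(32, &ctx);                    /* one completion context */
            io_prep_pread(&cb, fd, buf, len, 0);   /* describe the read */
            io_submit(ctx, 1, cbs);                /* many iocbs can be batched here */
            io_getevents(ctx, 1, 1, &ev, NULL);    /* reap completions in batches */
            io_destroy(ctx);
            return (int)ev.res;                    /* bytes read or -errno */
    }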

        Ingo
