Re: context switch vs. signal delivery [was: Re: Accelerating user mode linux]

From: Linus Torvalds
Date: Mon Aug 05 2002 - 00:35:13 EST

In article <>,
Andi Kleen <> wrote:
>Ingo Molnar <> writes:
>> actually the opposite is true, on a 2.2 GHz P4:
>> $ ./lat_sig catch
>> Signal handler overhead: 3.091 microseconds
>> $ ./lat_ctx -s 0 2
>> 2 0.90
>> ie. *process to process* context switches are 3.4 times faster than signal
>> delivery. Ie. we can switch to a helper thread and back, and still be
>> faster than a *single* signal.
>This is because the signal save/restore does a lot of unnecessary stuff.
>One optimization I implemented at one time was adding a SA_NOFP signal
>bit that told the kernel that the signal handler did not intend to
>modify floating point state (few signal handlers need FP). It would
>not save the FPU state then, which gave quite some speedup in signal
>delivery.
>Linux got a lot slower in signal delivery when the SSE2 support was
>added. That got this speed back.

This will break _horribly_ when (if) glibc starts using SSE2 for things
like memcpy() etc.

I agree that it is really sad that we have to save/restore FP on
signals, but I think it's unavoidable. Your hack may work for you, but
it just gets really dangerous in general. Having signals randomly and
subtly corrupt some SSE2 state just because the signal handler uses
something like memcpy() (without even realizing that this could lead to
trouble) is bad, bad, bad.

In other words, "not intending to" does not imply "will not". It's just
potentially too easy to change SSE2 state by mistake.

And yes, this signal handler thing is clearly visible on benchmarks.
MUCH too clearly visible. I just didn't see any safe alternatives
(and I still don't ;( )


This archive was generated by hypermail 2b29 : Wed Aug 07 2002 - 22:00:26 EST