That sounds a little defeatist ;)
Having seen the insides of a telephone switch, I always feel a little
funny looking at how loosely Unix-like OSes do things. The switch guys
don't like dynamic allocation or even hashing, since neither is
robust in an RT situation.
I understand that a general-purpose system like Linux will always tend
to place performance and portability before the specialized requirements
of ultra-reliable or ultra-secure systems. Still, there are a few
things that can be done. A few questions that come to mind:
- Is there a theorem prover (or static analyzer) that can determine the
maximum possible stack usage, given an entry point into the kernel?
If so, does anyone ever run it?
- Interrupts piggyback on the current kernel stack. Is there any value
in switching to a previously allocated stack on entry to an ISR? I.e.,
each ISR gets its own stack, so that you don't have to reserve enough
room on *each* process's kernel stack to handle the possibility of
*all* possible interrupts nesting on those stacks.
- I find the problem of fork going flaky to be, in some sense, worse
than a hard crash, especially on an unattended, unmonitored server.
Is it possible that a little auditing could reduce the kernel stack
back to one page?
Kip
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu