Re: kernel stack torture

Linus Torvalds (Linus.Torvalds@cs.helsinki.fi)
Sat, 17 Feb 1996 10:34:22 +0200


Drew Eckhardt: "Re: kernel stack torture" (Feb 16, 14:46):
> In message <911566A761C@rkdvmks1.ngate.uni-regensburg.de>, Ulrich.Windl@rz.uni-
> regensburg.de writes:
> >
> >Wouldn't it be best to grow the kernel stack instead of panic-ing? I
> >don't know if this is easy, but it sounds like a good idea. One could
> >also watch how much kernel stack is actually used.

We can't grow the stack: if we get a page fault because the stack
doesn't exist, we can't handle that page fault because we don't have any
stack for the fault..

The x86 hardware can in theory handle this by having a page fault
handler that switches to another process with another stack, but that is
too slow for words. And you could maybe handle it by having a double
fault handler that switches to another process, but quite frankly I
doubt that would work out very well either.

(If I remember correctly, double faults don't work correctly on early
386's, and this would also complicate the stack handling a lot in any
case).

On x86, you could in theory also use a special stack segment and catch
the stack segment faults, but that doesn't work with the way C expects
things (with stack pointers and normal data pointers being of the same
type). And again, it's unportable.
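
A trivial user-space illustration of the problem (nothing here is real
kernel code, the names are just made up):

#include <stdio.h>
#include <string.h>

/* some helper that fills in a caller-supplied buffer */
static void get_name(char *buf, size_t len)
{
	strncpy(buf, "example", len - 1);
	buf[len - 1] = '\0';
}

int main(void)
{
	char name[32];	/* lives on the stack */

	/*
	 * The address of the stack object is passed around as a
	 * perfectly ordinary data pointer - nothing distinguishes it
	 * from a pointer into the heap or into static data.  With a
	 * separate stack segment the two kinds of pointer would no
	 * longer be interchangeable.
	 */
	get_name(name, sizeof(name));
	printf("%s\n", name);
	return 0;
}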

> Therefore, we can probably put the kernel stack in virtual memory,
> and grow it at will, keeping the changes entirely local to the VM
> code. Changing anything else would be too much pain.

It is way too painful to do in practice - you _really_ don't want to
have the stack disappear from under you in kernel mode and try to fix it
up with a fault handler (it's not likely to be even possible in theory
on all architectures).

> I'd also suggest a printk() similar to the panic message every time a
> kernel stack grows beyond the largest one seen so far, since this is
> probably indicative of a design problem or bug.

Sure, something like this is probably a reasonable debugging aid. I'll
think about it.
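
Roughly what I have in mind (a user-space sketch only - the real thing
would use printk() and the per-task kernel stack, and every name below
is invented):

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define STACK_SIZE 8192		/* hypothetical stack size */

static uintptr_t stack_top;	/* highest address of the stack region */
static size_t max_stack_used;	/* deepest usage seen so far */

/*
 * Compare the current stack depth against the high-water mark and
 * report whenever a new maximum shows up.  In the kernel this check
 * would sit in some frequently-taken path.
 */
static void check_stack_usage(void)
{
	int marker;
	size_t used = stack_top - (uintptr_t)&marker;

	if (used > max_stack_used) {
		max_stack_used = used;
		printf("new stack usage maximum: %zu of %d bytes\n",
		       used, STACK_SIZE);
	}
}

/* recurse a bit just to exercise the check at different depths */
static void recurse(int depth)
{
	char buf[256];		/* burn some stack */

	buf[0] = (char)depth;
	check_stack_usage();
	if (depth > 0)
		recurse(depth - 1);
}

int main(void)
{
	int anchor;

	stack_top = (uintptr_t)&anchor;
	recurse(10);
	return 0;
}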

Linus