Re: [RFC 11/14] x86: add support for Dynamic Kernel Stacks

From: Andy Lutomirski
Date: Mon Mar 11 2024 - 19:34:46 EST


On Mon, Mar 11, 2024, at 4:10 PM, Pasha Tatashin wrote:
> On Mon, Mar 11, 2024 at 6:17 PM Andy Lutomirski <luto@xxxxxxxxxx> wrote:
>>
>>
>>
>> On Mon, Mar 11, 2024, at 9:46 AM, Pasha Tatashin wrote:
>> > Add dynamic_stack_fault() calls to the kernel faults, and also declare
>> > HAVE_ARCH_DYNAMIC_STACK = y, so that dynamic kernel stacks can be
>> > enabled on x86 architecture.
>> >
>> > Signed-off-by: Pasha Tatashin <pasha.tatashin@xxxxxxxxxx>
>> > ---
>> > arch/x86/Kconfig | 1 +
>> > arch/x86/kernel/traps.c | 3 +++
>> > arch/x86/mm/fault.c | 3 +++
>> > 3 files changed, 7 insertions(+)
>> >
>> > diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
>> > index 5edec175b9bf..9bb0da3110fa 100644
>> > --- a/arch/x86/Kconfig
>> > +++ b/arch/x86/Kconfig
>> > @@ -197,6 +197,7 @@ config X86
>> > select HAVE_ARCH_USERFAULTFD_WP if X86_64 && USERFAULTFD
>> > select HAVE_ARCH_USERFAULTFD_MINOR if X86_64 && USERFAULTFD
>> > select HAVE_ARCH_VMAP_STACK if X86_64
>> > + select HAVE_ARCH_DYNAMIC_STACK if X86_64
>> > select HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
>> > select HAVE_ARCH_WITHIN_STACK_FRAMES
>> > select HAVE_ASM_MODVERSIONS
>> > diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
>> > index c3b2f863acf0..cc05401e729f 100644
>> > --- a/arch/x86/kernel/traps.c
>> > +++ b/arch/x86/kernel/traps.c
>> > @@ -413,6 +413,9 @@ DEFINE_IDTENTRY_DF(exc_double_fault)
>> > }
>> > #endif
>> >
>> > + if (dynamic_stack_fault(current, address))
>> > + return;
>> > +
>>
>> Sorry, but no, you can't necessarily do this. I say this as the person who wrote this code, and I justified my code on the basis that we are not recovering -- we're jumping out to a different context, and we won't crash if the origin context for the fault is corrupt. The SDM is really quite unambiguous about it: we're in an "abort" context, and returning is not allowed. And this may well be the real deal -- the microcode does not promise to have the return frame and the actual faulting context matched up here, and there is no architectural guarantee that returning will do the right thing.
>>
>> Now we do have some history of getting a special exception, e.g. for espfix64. But espfix64 is a very special case, and the situation you're looking at is very general. So unless Intel and AMD are both willing to publicly document that it's okay to handle stack overflow like this, where any instruction in the ISA may have caused the overflow, we're not going to do it.
>
> Hi Andy,
>
> Thank you for the insightful feedback.
>
> I'm somewhat confused about why we end up in exc_double_fault() in the
> first place. My initial assumption was that dynamic_stack_fault()
> would only be needed within do_kern_addr_fault(). However, while
> testing in QEMU, I found that when using memset() on a stack variable,
> code like this:
>
> rep stos %rax,%es:(%rdi)
>
> causes a double fault instead of a regular fault. I added it to
> exc_double_fault() as a result, but I'm curious if you have any
> insights into why this behavior occurs.
>

Imagine you're a CPU running kernel code, on a fairly traditional architecture like x86. The code tries to access some swapped-out user memory. You say "sorry, that memory is not present" and generate a page fault. You save the current state *to the stack* and change the program counter to point to the page fault handler. The page fault handler does its thing, then pops the old state off the stack and resumes the faulting code.

A few microseconds later, the kernel fills up its stack and then does:

PUSH something

but that would write to a not-present stack page, because you already filled the stack. Okay, a page fault -- no big deal, we know how to handle that. So you push the current state to the stack. Oh wait, you *can't* push the current state to the stack, because that would involve writing to an unmapped page of memory.

So you trigger a double-fault. You push some state to the double-fault handler's special emergency stack. But wait, *what* state do you push? Is it the state that did the "PUSH something" and overflowed the stack? Or is it some virtual state that's a mixture of that and the failed page fault delivery? What if the stack wasn't quite full and you actually succeeded in pushing the old stack pointer but not the old program counter? What saved state goes where?
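
To make that concrete, here's a rough C model of the delivery sequence (illustrative only -- the real thing is microcode, the frame layout is simplified, and none of this is real kernel code):

#include <stdint.h>
#include <stdbool.h>

struct cpu { uint64_t rip, rsp, rflags, cr2; };

/* Stand-in for "is this stack address backed by a present page?" */
static bool mapped(uint64_t addr) { return addr >= 0x100000; }

/* A push performed *by the exception delivery itself*: if the store
   would fault, delivery fails. */
static bool push64(struct cpu *c, uint64_t val)
{
        c->rsp -= 8;
        (void)val;                      /* the store itself is elided */
        return mapped(c->rsp);
}

/* Returns false when delivery itself faults -> the CPU escalates to
   #DF, and the architecture makes no promise about what saved state,
   if any, describes the original faulting instruction. */
static bool deliver_page_fault(struct cpu *c, uint64_t addr,
                               uint64_t handler)
{
        c->cr2 = addr;                  /* faulting address for handler */
        if (!push64(c, c->rflags) ||    /* old RFLAGS */
            !push64(c, c->rip) ||       /* old RIP */
            !push64(c, addr))           /* error code (simplified) */
                return false;           /* -> double fault */
        c->rip = handler;               /* now run the handler */
        return true;
}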

This is a complicated mess, so the people who designed all this said 'hey, wait a minute, let's not call double faults a "fault" -- let's call them an "abort"' so we can stop confusing ourselves and ship CPUs to customers. And "abort" means "the saved state is not well defined -- don't rely on it having any particular meaning".

So, until a few years ago, we would just print something like "PANIC: double fault" and kill the whole system. A few years ago, I decided this was lame, and I wanted to have stack guard pages, so I added fancy new logic: instead, we do our best to display the old state, but it's a guess, and all we're doing with it is printk -- if it's wrong, it's annoying, but that's all. And then we kill the running thread -- instead of trying to return (and violating our sacred contract with the x86 architecture), we *reset* the current crashing thread's state to a known-good state. Then we return to *that* state. Now we're off the emergency stack and running something resembling normal kernel code, but we can't return, as there is nowhere to return to. But that's fine -- instead we kill the current thread, kind of like _exit(). That never returns, so it's okay that we can't return.
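
In rough code terms, the recovery looks like this (a sketch of the idea only: the stack-pointer reset has to happen in assembly, and apart from make_task_dead() the helper names here are made up):

/* Sketch only. Resetting RSP cannot be done from C for the running
   task; the real kernel does it in assembly. Helpers other than
   make_task_dead() are invented for illustration. */
static void __noreturn recover_from_double_fault(struct pt_regs *guessed)
{
        /* Best effort: the saved state is a guess, and all we do
           with it is printk, so being wrong is merely annoying. */
        show_regs_best_effort(guessed);         /* hypothetical */

        /* Do NOT iret back to the faulting context. Point RSP at
           the top of the thread stack -- a state we *know* is good
           -- and run C from there. */
        rewind_stack_to_top(current);           /* hypothetical */

        /* Kill the thread; like _exit(), this never returns, so
           it's fine that there is nowhere to return to. */
        make_task_dead(SIGSEGV);
}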

But your patch adds a return statement to this whole mess, which will return to the moderately-likely-to-be-corrupt state that double-faulted inside the microcode's page fault delivery path. You have stepped outside the well-defined part of the x86 architecture and triggered something akin to Undefined Behavior. The CPU won't catch fire, but it reserves the right to execute from an incorrect RSP and/or RIP, to be in the middle of an instruction, etc.

(For that matter, what if there was exactly enough room to enter the page fault handler, but the very first instruction of the page fault handler overflowed the stack? Then you allocate more memory, get lucky, successfully resume the page fault handler, and promptly oops because the handler runs and thinks you took a kernel page fault? My OOPS code handles that, but, again, it's not trying to recover.)

>> There are some other options: you could pre-map
>
> Pre-mapping would be expensive. It would mean pre-mapping the dynamic
> pages for every scheduled thread, and we'd still need to check the
> access bit every time a thread leaves the CPU.

That's a write to four consecutive words in memory, with no locking required.
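
i.e. something like this at context-switch time (a sketch: it assumes 4 KiB pages and a 16 KiB maximum stack, so four PTEs, and the stack_ptep_of()/stack_page_pte() helpers are hypothetical, not existing API):

static void premap_thread_stack(struct task_struct *next)
{
        pte_t *ptep = stack_ptep_of(next);      /* hypothetical */
        int i;

        /* THREAD_SIZE / PAGE_SIZE == 4 here: four consecutive PTE
           stores, no locks, and no possibility of a stack fault
           while 'next' is running. */
        for (i = 0; i < THREAD_SIZE / PAGE_SIZE; i++)
                set_pte(ptep + i, stack_page_pte(next, i));
}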

> Dynamic thread faults
> should be considered rare events and thus shouldn't significantly
> affect the performance of normal context switch operations. With 8K
> stacks, we might encounter only 0.00001% of stacks requiring an extra
> page, and even fewer needing 16K.

Well yes, but if you crash 0.0001% of the time due to the microcode not liking you, you lose. :)

>
>> Also, I think the whole memory allocation concept in this series is a bit odd. Fundamentally, we *can't* block on these stack faults -- we may be in a context where blocking will deadlock. We may be in the page allocator. Panicking due to kernel stack allocation failure would be very unpleasant.
>
> We never block during handling stack faults. There's a per-CPU page
> pool, guaranteeing availability for the faulting thread. The thread
> simply takes pages from this per-CPU data structure and refills the
> pool when leaving the CPU. The faulting routine is efficient,
> requiring a fixed number of loads without any locks, stalling, or even
> cmpxchg operations.

You can't block when scheduling, either. What if you can't refill the pool?
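
To spell out where that bites, here's the scheme as I understand it (all names are hypothetical, just to make the failure mode concrete):

#define NR_POOL_PAGES 4                 /* hypothetical pool depth */

struct stack_pool {
        struct page *pages[NR_POOL_PAGES];
        int nr;
};
static DEFINE_PER_CPU(struct stack_pool, stack_pool);

/* Fault path: lock-free, just take a page from this CPU's pool. */
static struct page *stack_pool_take(void)
{
        struct stack_pool *pool = this_cpu_ptr(&stack_pool);

        return pool->nr ? pool->pages[--pool->nr] : NULL;
}

/* Context-switch path: refill. This is where the question bites --
   we're in the scheduler, so we can't block, and a non-blocking
   allocation can fail. */
static void stack_pool_refill(void)
{
        struct stack_pool *pool = this_cpu_ptr(&stack_pool);

        while (pool->nr < NR_POOL_PAGES) {
                struct page *page = alloc_page(GFP_ATOMIC);

                if (!page)
                        break;  /* pool left short: the next stack
                                   fault may find it empty -- then
                                   what? */
                pool->pages[pool->nr++] = page;
        }
}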