The 'we interrupt user-space' case got 7 cycles slower, but that latency
is dominated by entry costs anyway. The 'we interrupt kernel-space'
latency is untouched (it may even have gotten slightly faster due to the
backwards branch).
Alternatively, we could sti in the reschedule path, but that is again
unnecessary for the 'normal' (non-interrupt) path. We could also split the
IRQ and syscall paths, as was done earlier ...
the patch works just fine here.
-- mingo
--- linux/arch/i386/kernel/entry.S.orig Mon Mar 1 13:24:53 1999
+++ linux/arch/i386/kernel/entry.S Mon Mar 1 14:27:22 1999
@@ -193,7 +193,6 @@
ALIGN
signal_return:
- sti # we can get here from an interrupt handler
testl $(VM_MASK),EFLAGS(%esp)
movl %esp,%eax
jne v86_signal_return
@@ -233,8 +232,12 @@
movl EFLAGS(%esp),%eax # mix EFLAGS and CS
movb CS(%esp),%al
testl $(VM_MASK | 3),%eax # return to VM86 mode or non-supervisor?
- jne ret_with_reschedule
- jmp restore_all
+ je restore_all
+ sti # at this point we know we are doing
+ # a 'slow' return, possibly executing
+ # complex kernel code. Enable interrupts
+ # unconditionally.
+ jmp ret_with_reschedule
ALIGN
handle_bottom_half:
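For readers following the control-flow change rather than the assembly: a
hedged C sketch of the patched return path is below. `VM_MASK` mirrors the
EFLAGS VM86 bit tested in entry.S; the `struct ret_path` result and the
function name are illustrative inventions, not kernel code, and the
`ret_with_reschedule` work is collapsed into a flag. The point is that the
fast kernel-mode return now takes the `je` and never touches the interrupt
flag, while any return to VM86 or user mode does one unconditional sti
before the slow path.

```c
#include <stdbool.h>

#define VM_MASK 0x00020000u  /* EFLAGS VM bit, as on i386 */

/* Hypothetical model of the patched ret_from_intr tail. */
struct ret_path {
    bool slow;      /* took the ret_with_reschedule route */
    bool sti_done;  /* interrupts re-enabled on the way */
};

static struct ret_path ret_from_intr(unsigned eflags, unsigned cs)
{
    /* "mix EFLAGS and CS": low byte of CS (CPL in bits 0-1)
     * merged into EFLAGS, as the movb in the patch does. */
    unsigned mix = (eflags & ~0xffu) | (cs & 0xffu);
    struct ret_path r = { false, false };

    if (!(mix & (VM_MASK | 3)))  /* je restore_all */
        return r;                /* fast kernel return, no sti */

    r.sti_done = true;           /* sti: slow return may run */
    r.slow = true;               /* complex kernel code next */
    return r;                    /* jmp ret_with_reschedule */
}
```

A kernel-mode return (CPL 0, VM clear) leaves interrupts alone, matching
the claim that kernel-interrupt latency is untouched; a user-mode or VM86
return pays the sti once, on the path already dominated by entry costs.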
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/