Re: [PATCH] x86-64: Fix interrupt race in idle callback (3rd try)

From: Venkatesh Pallipadi
Date: Wed Nov 29 2006 - 14:33:55 EST



The idle callbacks have a race: after enter_idle() sets isidle, an interrupt
can still occur on that CPU before it actually goes idle. Because of this, an
IDLE_END callback can get delivered before IDLE_START. To avoid these races,
disable interrupts before enter_idle() and make sure that the idle routines do
not re-enable interrupts before they actually enter idle.
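
For illustration, the ordering the patch enforces in the cpu_idle() loop looks
roughly like this (a sketch of the patched flow, not the verbatim kernel loop;
the helper names follow arch/x86_64/kernel/process.c):

	while (!need_resched()) {
		/*
		 * Interrupts must be off before enter_idle().  If an
		 * interrupt sneaked in after isidle is set but before the
		 * CPU actually halts, its exit_idle() could deliver
		 * IDLE_END out of order with IDLE_START.
		 */
		local_irq_disable();
		enter_idle();
		idle();		/* e.g. default_idle(); safe_halt() re-enables */
		__exit_idle();
	}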

Note that poll_idle() still has this race, since it has to enable interrupts
before it starts polling. All other idle routines have the race fixed.
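
The remaining window in poll_idle() is roughly the following (the patched
routine again, annotated; not new code):

	static void poll_idle(void)
	{
		/*
		 * Polling needs interrupts on, so re-enable them here.  An
		 * interrupt taken right after this enable can end the idle
		 * period (IDLE_END) before the CPU has really started
		 * polling -- the residual race mentioned above.
		 */
		local_irq_enable();
		cpu_relax();
	}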

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@xxxxxxxxx>

---

Here is the second try. Thanks to Suresh Siddha, who pointed out an issue with
enabling interrupts in the first patch.

Well, the 2nd try did attempt a fix, but it did not really fix the issue in the
original patch. I guess today is not my day to send out patches.
Here is the 3rd try. Again, thanks to Suresh for being the eagle eye...


Index: linux-2.6.19-rc-mm/arch/x86_64/kernel/process.c
===================================================================
--- linux-2.6.19-rc-mm.orig/arch/x86_64/kernel/process.c
+++ linux-2.6.19-rc-mm/arch/x86_64/kernel/process.c
@@ -127,6 +127,7 @@ static void default_idle(void)
*/
static void poll_idle (void)
{
+ local_irq_enable();
cpu_relax();
}

@@ -206,6 +207,12 @@ void cpu_idle (void)
idle = default_idle;
if (cpu_is_offline(smp_processor_id()))
play_dead();
+ /*
+ * Idle routines should keep interrupts disabled
+ * from here on, until they go to idle.
+ * Otherwise, idle callbacks can misfire.
+ */
+ local_irq_disable();
enter_idle();
idle();
/* In many cases the interrupt that ended idle
@@ -243,8 +250,16 @@ void mwait_idle_with_hints(unsigned long
/* Default MONITOR/MWAIT with no hints, used for default C1 state */
static void mwait_idle(void)
{
- local_irq_enable();
- mwait_idle_with_hints(0,0);
+ if (!need_resched()) {
+ __monitor((void *)&current_thread_info()->flags, 0, 0);
+ smp_mb();
+ if (!need_resched())
+ __sti_mwait(0, 0);
+ else
+ local_irq_enable();
+ } else {
+ local_irq_enable();
+ }
}

void __cpuinit select_idle_routine(const struct cpuinfo_x86 *c)
Index: linux-2.6.19-rc-mm/include/asm-x86_64/processor.h
===================================================================
--- linux-2.6.19-rc-mm.orig/include/asm-x86_64/processor.h
+++ linux-2.6.19-rc-mm/include/asm-x86_64/processor.h
@@ -475,6 +475,14 @@ static inline void __mwait(unsigned long
: :"a" (eax), "c" (ecx));
}

+static inline void __sti_mwait(unsigned long eax, unsigned long ecx)
+{
+ /* "mwait %eax,%ecx;" */
+ asm volatile(
+ "sti; .byte 0x0f,0x01,0xc9;"
+ : :"a" (eax), "c" (ecx));
+}
+
extern void mwait_idle_with_hints(unsigned long eax, unsigned long ecx);

#define stack_current() \
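
For reference, the "sti; mwait" sequence in __sti_mwait() works because STI
inhibits interrupts until after the instruction that follows it has executed,
so no interrupt can be taken between re-enabling interrupts and entering
MWAIT; a pending interrupt instead becomes the MWAIT wake-up event. Below is
an annotated sketch of the patched mwait_idle() flow (the function name is
illustrative only, not part of the patch):

	/* Interrupts are off on entry, per the cpu_idle() change above. */
	static void mwait_idle_sketch(void)	/* illustrative name */
	{
		if (!need_resched()) {
			/* Arm the monitor on this task's flags word. */
			__monitor((void *)&current_thread_info()->flags, 0, 0);
			smp_mb();
			if (!need_resched())
				/* Enable interrupts and enter MWAIT with no
				 * intervening interrupt window. */
				__sti_mwait(0, 0);
			else
				local_irq_enable();
		} else {
			local_irq_enable();
		}
	}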