[patch] [sched] yield speedup, 2.5.3-pre5

From: Ingo Molnar (mingo@elte.hu)
Date: Mon Jan 28 2002 - 11:53:48 EST


the attached patch speeds up sched_yield() context switch performance by
about 5%. (This patch depends on the previous bitmap-cleanup patch,
functionality-wise.)

instead of doing a full dequeue/enqueue of the task within the same priority
array, we can skip a number of steps even in the generic case. If a yielded
task has already reached maximum priority (as tasks do after yielding a number
of times), then we can skip even more steps - the task only has to be requeued
at the end of the runqueue.
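
(for reference, the generic helpers that this fast path bypasses look roughly
like the sketch below; it is paraphrased from kernel/sched.c of this era
rather than quoted from the patch, so details such as the task_t typedef and
the nr_active bookkeeping should be treated as assumptions:)

static inline void dequeue_task(task_t *p, prio_array_t *array)
{
        /* bookkeeping, then removal from the per-priority queue */
        array->nr_active--;
        list_del(&p->run_list);
        /* clear the priority bit if that queue just became empty */
        if (list_empty(array->queue + p->prio))
                __clear_bit(p->prio, array->bitmap);
}

static inline void enqueue_task(task_t *p, prio_array_t *array)
{
        /* add to the tail of the queue for p->prio and mark it non-empty */
        list_add_tail(&p->run_list, array->queue + p->prio);
        __set_bit(p->prio, array->bitmap);
        array->nr_active++;
        p->array = array;
}

since a yielding task goes straight back into the same array, the nr_active
and p->array updates cancel out; and when the priority does not move (maximum
priority, or a RT task) the bitmap bit cannot change either, so a plain
list_del()/list_add_tail() is all that is needed.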

        Ingo

diff -rNu linux/kernel/sched.c linux/kernel/sched.c
--- linux/kernel/sched.c Mon Jan 28 15:23:50 2002
+++ linux/kernel/sched.c Mon Jan 28 15:24:44 2002
@@ -1084,12 +1112,22 @@
          */
         spin_lock_irq(&rq->lock);
         array = current->array;
-        dequeue_task(current, array);
-        if (likely(!rt_task(current)))
-                if (current->prio < MAX_PRIO-1)
-                        current->prio++;
-        enqueue_task(current, array);
-        spin_unlock_irq(&rq->lock);
+        /*
+         * If the task has reached maximum priority (or is a RT task)
+         * then just requeue the task to the end of the runqueue:
+         */
+        if (likely(current->prio == MAX_PRIO-1 || rt_task(current))) {
+                list_del(&current->run_list);
+                list_add_tail(&current->run_list, array->queue + current->prio);
+        } else {
+                list_del(&current->run_list);
+                if (list_empty(array->queue + current->prio))
+                        __clear_bit(current->prio, array->bitmap);
+                current->prio++;
+                list_add_tail(&current->run_list, array->queue + current->prio);
+                __set_bit(current->prio, array->bitmap);
+        }
+        spin_unlock(&rq->lock);

         schedule();

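(the mail does not say how the ~5% was measured; as a rough illustration, a
minimal user-space microbenchmark along the following lines would exercise
this code path - yield_bench.c and the iteration count are hypothetical, and
a single instance only hits the requeue fast path, while two copies bound to
one CPU also measure the actual context switch:)

/* yield_bench.c - hypothetical sched_yield() microbenchmark */
#include <sched.h>
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
        const long loops = 1000000;
        struct timeval start, end;
        double usecs;
        long i;

        gettimeofday(&start, NULL);
        for (i = 0; i < loops; i++)
                sched_yield();          /* enters sys_sched_yield() each time */
        gettimeofday(&end, NULL);

        usecs = (end.tv_sec - start.tv_sec) * 1000000.0 +
                (end.tv_usec - start.tv_usec);
        printf("%.3f usecs per sched_yield()\n", usecs / loops);
        return 0;
}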