Re: [test-PATCH] Re: [QUESTION] 2.4.x nice level

From: George Anzinger (george@mvista.com)
Date: Wed Apr 11 2001 - 11:27:16 EST


One rule of optimization is to move any code you can outside the loop.
Why isn't the NICE_TO_TICKS() calculation done when nice is changed,
instead of on EVERY recalculation? I guess another way to ask this is:
who needs to see the original nice value? Would it be worth another
task_struct entry to move this calculation out of the loop?
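
Something like this, perhaps (completely untested, sketched against
2.4.3-ac4; the field name "counter_refill" is mine, just for
illustration):

	/* in struct task_struct, next to counter and nice: */
	long counter_refill;	/* cached NICE_TO_TICKS(nice) */

	/* wherever nice is written, e.g. in sys_setpriority(): */
	p->nice = niceval;
	p->counter_refill = NICE_TO_TICKS(niceval);

	/* the recalculation loop in schedule() then shrinks to: */
	for_each_task(p)
		p->counter = (p->counter >> 1) + p->counter_refill;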

George

Rik van Riel wrote:
>
> On Tue, 10 Apr 2001, Rik van Riel wrote:
>
> > I'll try to come up with a recalculation change that will make
> > this thing behave better, while still retaining the short time
> > slices for multiple normal-priority tasks and the cache footprint
> > schedule() and friends currently have...
>
> OK, here it is. It's nothing like MontaVista's all-singing,
> all-dancing scheduler patch that does it all, just a really minimal
> change that should stretch the nice levels to yield the following
> CPU usage:
>
> Nice     0     5    10    15    19
> %CPU   100    56    25     6     1
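
A quick sanity check on that table: with the patch, a task at nice n
is refilled on (20 - n) of every 20 recalculations, and its slice
NICE_TO_TICKS(n) is itself roughly proportional to (20 - n), so its
CPU share relative to a nice-0 task should land near ((20 - n)/20)^2.
The toy program below (plain userspace C; it ignores the integer
rounding in NICE_TO_TICKS() and any counter carry-over) reproduces
Rik's numbers to within rounding:

#include <stdio.h>

int main(void)
{
	static const int levels[] = { 0, 5, 10, 15, 19 };
	unsigned int i;

	for (i = 0; i < sizeof(levels) / sizeof(levels[0]); i++) {
		/* refill fraction for this nice level */
		double f = (20.0 - levels[i]) / 20.0;
		printf("nice %2d: ~%5.1f%% CPU\n", levels[i], 100.0 * f * f);
	}
	return 0;
}

(The nice-19 entry comes out nearer 1% in practice because
NICE_TO_TICKS() never hands out less than one tick.)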
>
> Note that the code doesn't change the actual scheduling code,
> just the recalculation. Care has also been taken to not increase
> the cache footprint of the scheduling and recalculation code.
>
> I'd love to hear some test results from people who are interested
> in wider nice levels. How does this run on your system? Can you
> trigger bad behaviour in any way?
>
> regards,
>
> Rik
> --
> Virtual memory is like a game you can't win;
> However, without VM there's truly nothing to lose...
>
> http://www.surriel.com/
> http://www.conectiva.com/ http://distro.conectiva.com.br/
>
> --- linux-2.4.3-ac4/kernel/sched.c.orig Tue Apr 10 21:04:06 2001
> +++ linux-2.4.3-ac4/kernel/sched.c Wed Apr 11 06:18:46 2001
> @@ -686,8 +686,26 @@
>  		struct task_struct *p;
>  		spin_unlock_irq(&runqueue_lock);
>  		read_lock(&tasklist_lock);
> -		for_each_task(p)
> +		for_each_task(p) {
> +			if (p->nice <= 0) {
> +				/* The normal case... */
>  			p->counter = (p->counter >> 1) + NICE_TO_TICKS(p->nice);
> +			} else {
> +				/*
> +				 * Niced tasks get less CPU less often, leading to
> +				 * the following distribution of CPU time:
> +				 *
> +				 * Nice    0    5   10   15   19
> +				 * %CPU  100   56   25    6    1
> +				 */
> +				short prio = 20 - p->nice;
> +				p->nice_calc += prio;
> +				if (p->nice_calc >= 20) {
> +					p->nice_calc -= 20;
> +					p->counter = (p->counter >> 1) + NICE_TO_TICKS(p->nice);
> +				}
> +			}
> +		}
>  		read_unlock(&tasklist_lock);
>  		spin_lock_irq(&runqueue_lock);
>  	}
> --- linux-2.4.3-ac4/include/linux/sched.h.orig Tue Apr 10 21:04:13 2001
> +++ linux-2.4.3-ac4/include/linux/sched.h Wed Apr 11 06:26:47 2001
> @@ -303,7 +303,8 @@
>  	 * the goodness() loop in schedule().
>  	 */
>  	long counter;
> -	long nice;
> +	short nice_calc;
> +	short nice;
>  	unsigned long policy;
>  	struct mm_struct *mm;
>  	int has_cpu, processor;
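
To see the refill cadence the nice_calc accumulator produces, here is
a small standalone toy (plain userspace C, not kernel code) that runs
the accumulator from the patch for a single task:

#include <stdio.h>

int main(void)
{
	int nice = 15;		/* try any level in 1..19 */
	short nice_calc = 0;	/* the accumulator from the patch */
	short prio = 20 - nice;
	int recalc, refills = 0;

	for (recalc = 1; recalc <= 20; recalc++) {
		nice_calc += prio;
		if (nice_calc >= 20) {	/* same overflow test as the patch */
			nice_calc -= 20;
			refills++;
			printf("recalc %2d: refill\n", recalc);
		}
	}
	printf("nice %d: %d refill(s) per 20 recalculations\n", nice, refills);
	return 0;
}

A nice-n task gets 20 - n refills per 20 recalculations, which is
where the stretching comes from. Note also that the sched.h hunk
splits the old "long nice" into two shorts, so on 32-bit the
task_struct stays the same size and the cache footprint Rik mentions
is unchanged.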


