> * Bill Davidsen <davidsen@xxxxxxx> wrote:
>
> > I have posted the results of my initial testing, measuring IPC
> > rates using various schedulers under no load, limited nice load,
> > and heavy load at nice 0.
> >
> > http://www.tmr.com/~davidsen/ctxbench_testing.html
> nice! For this to become really representative though i'd like to
> ask for a real workload function to be used after the task gets the
> lock/message. The reason is that there is an inherent balancing
> conflict in this area: should the scheduler 'spread' tasks to other
> CPUs or not? In general, for all workloads that matter, the answer
> is almost always: 'yes, it should'.

Added to the short to-do list. Note that this was originally simply a
check to see which IPC mechanism works best (or at all) in an O/S. It
has been useful for some other things, and an option to add a work
function will be forthcoming.
> But in your ctxbench results the work a task performs after doing
> IPC is not reflected (the benchmark goes straight on to do the next
> IPC - thus penalizing scheduling strategies that move tasks to other
> CPUs) - hence the bonus of a scheduler properly spreading out tasks
> is not measured fairly. A real-life IPC workload is rarely just
> about messaging around (a single task could do that by itself) -
> some real workload function is used. You can see this effect
> yourself: do a "taskset -p 01 $$" before running ctxbench and you'll
> see the numbers improve significantly on all of the schedulers.

Can do.
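For reference, the comparison would go something like this (a sketch
only; assuming ctxbench is run with its defaults):

  $ ./ctxbench          # unpinned: tasks are free to migrate to other CPUs
  $ taskset -p 01 $$    # pin this shell (and its children) to CPU 0
  $ ./ctxbench          # pinned: the IPC numbers come out noticeably higher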
> As a solution i'd suggest to add a workload function with a 100 or
> 200 usecs (or larger) cost (as a fixed-length loop or something like
> that) so that the 'spreading' effect/benefit gets measured fairly
> too.
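Roughly what I have in mind for that (a sketch only - burn_loops(),
calibrate_work() and work() are hypothetical names, the loop count
needs per-machine calibration, and the volatile is there to keep the
compiler from optimizing the spin away):

#include <stdio.h>
#include <sys/time.h>

static unsigned long loops_per_usec;

/* fixed-length spin; volatile stops the compiler deleting the loop */
static void burn_loops(unsigned long n)
{
    volatile unsigned long i;

    for (i = 0; i < n; i++)
        ;
}

/* one-time calibration: time a large fixed loop with gettimeofday() */
static void calibrate_work(void)
{
    struct timeval t0, t1;
    unsigned long n = 100000000UL, usecs;

    gettimeofday(&t0, NULL);
    burn_loops(n);
    gettimeofday(&t1, NULL);
    usecs = (t1.tv_sec - t0.tv_sec) * 1000000UL
            + (t1.tv_usec - t0.tv_usec);
    loops_per_usec = n / (usecs ? usecs : 1);
}

/* the workload function: e.g. work(200) after each IPC round trip */
static void work(unsigned long usecs)
{
    burn_loops(usecs * loops_per_usec);
}

int main(void)
{
    calibrate_work();
    printf("calibrated: %lu loops/usec\n", loops_per_usec);
    work(200);    /* burn ~200 usecs of CPU, as after each IPC */
    return 0;
}

The calibration is crude, but it should be good enough for relative
comparisons between schedulers.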