Re: Scheduler regression: Too frequent timer interrupts(?)

From: Christoph Lameter
Date: Fri Apr 17 2009 - 14:27:56 EST


On Fri, 17 Apr 2009, Peter Zijlstra wrote:

> Something like this is nice to compare between kernels. Chris'
> suggestion of timing a simple fixed loop:
>
> $ time (let i=1000000; while [ $i -gt 0 ]; do let i--; done)
>
> real 0m14.389s
> user 0m13.787s
> sys 0m0.498s
>
> Is also useful, since it gives an absolute measure of time available to
> user-space.
>
> Although I suspect a simple C while(i--); might be better due to less
> code involved.
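
For reference, a minimal C version of that loop could look like this
(a sketch only: the iteration count is arbitrary, and the counter is
volatile so the compiler does not optimize the loop away):

	int main(void)
	{
		volatile unsigned long i = 1000000000UL;

		while (i--)
			;	/* pure user-space work, no system calls */
		return 0;
	}

Run under time(1) the same way as the shell version.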

The absolute time available to user space is not that important. What
matters is that the processor stays available during latency-critical
operations and is not taken away by the OS. The length of the intervals
during which the OS takes the processor away determines the minimum
interval within which the application can react to events (e.g. RDMA
transfers via Infiniband, or operations on requests arriving via shared
memory). These operations often must occur in parallel on multiple
cores, so processing is delayed if any one of the cores encounters a
delay due to OS noise.

The latencytest code simulates a busy processor (no system calls, all
memory prefaulted). For some reason Linux is increasingly taking time
away from such processes, even though they intentionally run uncontended
on a dedicated processor. The regression makes current upstream unusable
for these applications.
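
To illustrate the kind of measurement involved (a sketch only, not the
actual latencytest code): spin on a dedicated processor and record any
gap between consecutive clock readings above a threshold. On Linux the
CLOCK_MONOTONIC read typically goes through the vDSO, so the hot loop
makes no system calls:

	#include <stdio.h>
	#include <time.h>

	static long long now_ns(void)
	{
		struct timespec ts;

		clock_gettime(CLOCK_MONOTONIC, &ts);
		return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
	}

	int main(void)
	{
		long long prev = now_ns();

		for (;;) {	/* pin with e.g. taskset -c 3 ./noise */
			long long now = now_ns();

			if (now - prev > 10000)	/* report gaps > 10 us */
				printf("%lld ns gap\n", now - prev);
			prev = now;
		}
	}

Any gap it reports on an otherwise idle, dedicated core is time the OS
took the processor away.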

It would be best for these applications if the processor were left
undisturbed. There is likely not much the OS needs to do on a busy
processor when there are no competing threads and no I/O taking place.