RE: Benchmarks - 1.3.11

Steve Martin (21 Jul 1995 15:31:07 -0700)

>> *******
>> Results
>> *******
>> Pipe-based Context Switching Test || 3007.4 -> 4804.9 +59.77%
>> Pipe Throughput Test || 18574.4 -> 18774.0 +1.07%
>> Execl Throughput Test || 60.9 -> 61.4 +0.82%
>[stuff deleted]

>Do these numbers actually mean anything? You run one of these
>benchmarks every single time a kernel comes out, there are always
>minute differences in the numbers, but nothing changes in the
>kernel except long --> _u32 or networking fixes or such. How does that
>explain all the differences in your numbers? Do you just run the test
>when you feel like it without regard to what is swapped in or out? Do
>you neglect to flush the disk cache? Are you the only user online when
>you run them? 1.3.11 is an exception if you install that experimental
>patch linus gave, but for all the others I have to wonder why your
>tests show a variation when there are no kernel changes to support

Hmm, I wouldn't call a 59.77% change 'minute.' In general, I discount
any change of less than 1% as noise, probably caused by bdflush and
update doing cache syncing in the background. The interesting cases are
when the differences go over 10%; then I'm curious about what changes
were made in the kernel that caused such a big change in performance.

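To make the thresholds above concrete, here is a small sketch (the helper
names `pct_change` and `classify` are mine, not from the benchmark suite)
that applies the same rule of thumb to the three results quoted at the top:
under 1% is noise, over 10% is worth investigating.

```python
# Hypothetical classifier for benchmark deltas, using the thresholds
# described above: <1% change = noise, >10% = interesting.
def pct_change(old, new):
    """Percent change from old to new."""
    return (new - old) / old * 100.0

def classify(old, new):
    delta = abs(pct_change(old, new))
    if delta < 1.0:
        return "noise"
    if delta > 10.0:
        return "interesting"
    return "minor"

if __name__ == "__main__":
    results = [
        ("Pipe-based Context Switching", 3007.4, 4804.9),
        ("Pipe Throughput", 18574.4, 18774.0),
        ("Execl Throughput", 60.9, 61.4),
    ]
    for name, old, new in results:
        print("%-30s %+7.2f%%  %s"
              % (name, pct_change(old, new), classify(old, new)))
```

Run against the quoted numbers, only the context-switching result crosses
the 10% line; the Execl delta (+0.82%) falls under the 1% noise cutoff.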
Since I have read how Carlo runs his benchmarks, I will say that I find
them very interesting, and perhaps useful for seeing how the new kernels
are improving. I will let Carlo explain his process.