Re: Benchmarks - 1.3.11

cjs (cjs@netcom.com)
Thu, 20 Jul 1995 17:32:41 -0500 (CDT)


> On Thu, 20 Jul 1995, cjs wrote:
>
> > > *******
> > > Results
> > > *******
> > >
> > > Pipe-based Context Switching Test || 3007.4 -> 4804.9 +59.77%
> > > Pipe Throughput Test || 18574.4 -> 18774.0 +1.07%
> > > Execl Throughput Test || 60.9 -> 61.4 +0.82%
> > [stuff deleted]
> >
> > Do these numbers actually mean anything? You run one of these
> > benchmarks every single time a kernel comes out, and there are always
> > minute differences in the numbers, but nothing changes in the
> > kernel except long --> _u32 or networking fixes and such. How does
> > that explain all the differences in your numbers? Do you just run the
> > test when you feel like it without regard to what is swapped in or
> > out? Do you neglect to flush the disk cache? Are you the only user
> > online when you run them? 1.3.11 is an exception if you install that
> > experimental patch Linus gave, but for all the others I have to
> > wonder why your tests show a variation when there are no kernel
> > changes to support them.
>
> Duh... What I do is, *always*, start the benchmark run (4 to 5 hours)
> after a fresh reboot. I log in from the first virtual terminal, start
> the daemon, check that it started OK, log out, and, generally, go to
> sleep.
>
> My machine is not on the net (except when I dial in). From MY point of
> view, I am being as impartial as possible.

Sounds to me like you are making a good effort to have similar
testing conditions every time. =)

> Each test is run 20 times (10 for the most time-consuming). If I find the
> time, I will add the calculation of the standard deviation.

I would be interested in seeing that. It's all well and good to say
that task switching times or the like have changed by 0.82% over the
previous version; however, when there haven't been any changes that
could make a difference in such a thing, you have to figure that you
are actually detecting something else.
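
For what it's worth, here is a rough sketch of the kind of calculation
I mean, in C. The per-run numbers below are made up for illustration;
in practice you would feed in the 20 (or 10) results from a single
test:

    /* stddev.c -- sketch: mean and sample standard deviation over
     * repeated benchmark runs. The values in runs[] are hypothetical.
     * Compile with: cc stddev.c -o stddev -lm
     */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double runs[] = { 3007.4, 3011.2, 2998.9, 3005.0, 3010.7 };
        int n = sizeof(runs) / sizeof(runs[0]);
        double sum = 0.0, mean, var = 0.0;
        int i;

        for (i = 0; i < n; i++)
            sum += runs[i];
        mean = sum / n;

        for (i = 0; i < n; i++)
            var += (runs[i] - mean) * (runs[i] - mean);
        var /= (n - 1);        /* sample variance */

        printf("mean = %.1f, stddev = %.1f\n", mean, sqrt(var));
        return 0;
    }

If the version-to-version difference is within a couple of standard
deviations of run-to-run noise, it is probably caching, daemons, or
the like rather than a real kernel change.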

Christopher