Re: Semaphore assembly-code bug

From: linux-os
Date: Mon Nov 01 2004 - 23:46:46 EST


On Mon, 1 Nov 2004, dean gaudet wrote:

> On Sun, 31 Oct 2004, linux-os wrote:
>
>> Timer overhead = 88 CPU clocks
>> push 3, pop 3 = 12 CPU clocks
>> push 3, pop 2 = 12 CPU clocks
>> push 3, pop 1 = 12 CPU clocks
>> push 3, pop none using ADD = 8 CPU clocks
>> push 3, pop none using LEA = 8 CPU clocks
>> push 3, pop into same register = 12 CPU clocks
>
> your microbenchmark makes assumptions about rdtsc which haven't been
> valid since the days of the 486. rdtsc has serializing aspects and
> overhead that you can't just eliminate by running it in a tight loop
> and subtracting out that "overhead".


Wrong.

(1) The '486 didn't have the rdtsc instruction.
(2) There are no 'serializing' or other black-magic aspects of
using the internal cycle-counter. That's exactly how you
can benchmark the execution time of accessible code sequences.
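
To make the method concrete: you read the cycle counter before and
after the sequence under test, and subtract the measured cost of the
reads themselves. A minimal sketch of that idea (not the code posted
earlier in this thread; it assumes GCC inline asm on a 32-bit i686
build) looks like this:

    /*
     * Sketch only -- not the code posted in this thread.
     * Assumes GCC inline asm on a 32-bit i686 build.
     */
    #include <stdio.h>

    static inline unsigned long long rdtsc(void)
    {
            unsigned int lo, hi;

            __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
            return ((unsigned long long)hi << 32) | lo;
    }

    int main(void)
    {
            unsigned long long start, end, overhead, cycles;

            /* Cost of the measurement itself ("timer overhead"). */
            start = rdtsc();
            end = rdtsc();
            overhead = end - start;

            /* Bracket the sequence under test, subtract the overhead. */
            start = rdtsc();
            __asm__ __volatile__(
                    "pushl %%eax\n\t"
                    "pushl %%ebx\n\t"
                    "pushl %%ecx\n\t"
                    "popl  %%ecx\n\t"
                    "popl  %%ebx\n\t"
                    "popl  %%eax\n\t"
                    : : : "memory");
            end = rdtsc();
            cycles = (end - start) - overhead;

            printf("push 3, pop 3 = %llu CPU clocks\n", cycles);
            return 0;
    }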

> you have to run your inner loops at least a few thousand of times
> between rdtsc invocations and divide it out to find out the average
> cost in order to eliminate the problems associated with rdtsc.
>
> -dean


You never average the cycle-time. The cycle-time is absolute.
You need to remove the effect of interrupts when you measure
performance, so you sample a few times and save the lowest
number. That's the number obtained during a testing interval
that was not interrupted.

The provided code allows you to experiment. You can set the
TRIES count to 1. You will find that the results are noisy if
you are connected to an active network. Good results can be
obtained with it set to 4 if your computer is not being blasted
with lots of broadcast packets from M$ servers.
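
All TRIES does is repeat the measurement and keep the smallest
sample, since any sample that took an interrupt comes out longer.
Roughly like this (an illustration only, reusing the rdtsc() helper
from the sketch above, not the posted code):

    #define TRIES 4

    unsigned long long time_sequence(void)
    {
            unsigned long long best = ~0ULL;
            int i;

            for (i = 0; i < TRIES; i++) {
                    unsigned long long start, end, delta;

                    start = rdtsc();
                    /* ... code sequence under test goes here ... */
                    end = rdtsc();
                    delta = end - start;

                    /* An interrupted sample runs long; keep the minimum. */
                    if (delta < best)
                            best = delta;
            }
            return best;    /* cycle count of an uninterrupted run */
    }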



Of course you are not really interested in learning anything
about this, are you?

Cheers,
Dick Johnson
Penguin : Linux version 2.6.9 on an i686 machine (5537.79 BogoMips).
Notice : All mail here is now cached for review by John Ashcroft.
98.36% of all statistics are fiction.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/