some questions using rdtsc in user space

From: Alexandre P. Nunes
Date: Fri Aug 02 2002 - 12:08:51 EST


A friend and I have run into an interesting scenario; maybe someone can
help us.

We need to access a device connected to the parallel port, which works
in the following way: you send a byte to the port to turn some bits on
(reflected on some pins of the parallel port), which the device
interprets as a command. Then you are supposed to sleep for about
~200 ns (maybe more, it just can't be much less), and then you send a
byte which the device receives as data pertinent to the command.

We wrote a program which accomplishes this by doing outb() to the
appropriate address(es), followed by usleep(1), but that seems to take
about 10 ms on average, which is far from good for our application.

I read somewhere that putting the process at real-time priority could
bring the average down to 2 ms, but I had this thought that I could
solve it by using the rdtsc instruction, because as far as I know it
doesn't cause a trap to kernel mode, which may be expensive. Am I right?

I don't need to use real-time Linux (though I'm considering real-time
priority), nor do I have desperate timing-precision needs; what I don't
want is huge delays. I can't rely on the low-latency patches either, if
possible (though I know they could help), because the program will
eventually run on standard kernels.

If using rdtsc is a good way, does anyone know how I can do some sort
of loop, converting the rdtsc difference (it's in CPU clock cycles,
right?) to nano/microseconds, and whether this could misbehave? (I
believe there could be some SMP issues, but for now that is irrelevant
for us.)



P.S.: carbon-copy me, since I'm not subscribed to the list.


This archive was generated by hypermail 2b29 : Wed Aug 07 2002 - 22:00:19 EST