No, it works like this:
1. Write code in C.
2. Have gcc generate assembly (your C will not beat this).
3. Try to improve the assembly.
When the compiler improves, you can go back to step 1 if needed.
Compilers don't seem to improve often enough to make the assembly work a waste.
> True, you can always beat a specific algorithm in c by implementing it in
> assembly (or at least be on par), but that does _not_ imply that C will not
> run faster than assembly, which, to the contrary, is often the case.
If the C runs faster, the assembly was badly done or is very obsolete.
>> Remember that a performance advantage worth 1 year of hardware
>> improvement won't just disappear after a year.
>
> Wrong. The performance improvement of implementing software in assembly
> in the 386 days has completely vanished, and has even turned into a
> loss of performance since compilers produce better code for pentium
> (etc.) than i386 assembly programmers.
>
> (if you are concerned about the timespan, take 486 programmers and
> pentium compilers)
pgcc took how long to be written?
I think the Pentium Pro was out before gcc was hacked to optimize
for the Pentium. There was plenty of time to write Pentium-optimized
assembly while gcc was doing i386 and i486 optimizations only.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/