Re: Intel microcode fixes [OFF-TOPIC]

doctor@fruitbat.org
Sat, 21 Nov 1998 22:58:11 -0800 (PST)


ralf@uni-koblenz.de stated ...
>
> On Thu, Nov 19, 1998 at 02:09:07PM -0800, doctor@fruitbat.org wrote:
>
> > Why? This technology has existed on mainframes for years. The really
> > neat aspect to this is that you can design your own instructions!
> > Microprocessors today are getting so complex that they start to take on
> > some of the attributes of mainframe processors. Why not an upgradable
> > processor? Think of how the Intel Floating Point Bug would have been a
> > non-issue if Pentiums were microcode upgradable? Why not extend the idea
> > and have a general-purpose micro-processor architecture onto which you can
> > upload whichever instruction set you wish? Say you feel like running
> > 68040-based software today? A quick zap and you're set! Hey, how about
> > running some of those old Apple ][ programs? Granted there are other
> > issues (bus access, interrupts, etc) which would have to be hammered out,
> > but the potential is incredible!
>
> Microcode upgrades are not a perfect solution for everything. In those
> CPUs that still use microcode, its role is getting less and less important.
> Simple example: old CPUs had multiply instructions implemented the way you
> would have done it in software on, say, a 6502. Newer designs came up with
> special microinstructions supporting multiplies, for example as 32 x 2-bit
> steps, and those microinstructions were used to implement the multiply
> instruction. Some RISCs have such instructions as well. Finally, the entire
> multiplier was put into hardware, and the microcode went away completely.

I'm quite aware of the evolution of processor architecture and
implementation, and agree that the trend has been to move away from
microcoded instructions toward hardwired ones. But to get any kind
of speed out of that you have to add super-pipelines, complex
instruction interleaving and register management, and a large cache to
account for the fact that your instruction stream is now about 3 times
larger than before. Suddenly your "let's get small" paradigm has
ballooned into a very large and hungry beast.
As a side "benefit", your compiler has to be smarter about the ordering
of the instructions it generates. Don't get me wrong, all this makes for
a faster execution flow, time-wise, but the volume of code you must
generate keeps expanding, thus requiring more cache, more main memory and
more disk space. That might be great for the memory and disk industries,
but for the poor consumer it means replacing your machine every 3 years
(or is it 2 now... progress and all that ;-)

> See the pattern? Microcode was always used to fill the gap where silicon
> was for some reason not the solution, be it complexity of design, correctness
> or whatever issues. Microcode was used to provide an outer shell around a
> changing implementation.

I see the pattern, alright, and I'm not impressed with it. I believe
that smaller is better, but having to use many itty-bitty instructions to
construct the equivalent of a higher level construct is like having to
build a house with sand, one grain at a time.

> Your idea is completely orthogonal to the RISC idea: it's putting complex,
> rarely used things into silicon, not software where they should go.

Well, I've been called worse ;-) The whole issue of RISC versus CISC
has been hashed out, time and time again, in comp.arch. I suggest we all
drop this thread and give linux-kernel mail back the bandwidth.

> Ralf
>

-- 
Peter A. Castro (doctor@fruitbat.org) or (pcastro@us.oracle.com)

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel"
in the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/