Re: 3.0 wishlist Was: Overview of 2.2.x goals?

Trond Eivind Glomsrød (teg@pvv.ntnu.no)
25 Jan 1998 10:22:26 +0100


Kamran <kamran@wallybox.cei.net> writes:

> > . same programming model no matter what your environment
>
> I believe they have taken a worst case approach by adopting a message
> passing programming interface everywhere.

That isn't "worst case". MPI is nice to work with...

> > . ease of use
>
> You would have to be an old hand at message passing systems. Message passing
> is not what most people feel comfortable with.

For distributed numerical computing - yes.

> Global variables and stack variables are the means of communication between
> different parts of ordinary programs, and these are shared memory. Using
> multiple threads or processes is just a step away. Message passing can only
> be easier for people who are already used to it.

I've programmed pthreads. I've programmed MPI. MPI is simpler, easier
and cleaner.
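
For illustration, a minimal sketch of what the message-passing model
looks like (MPI-1 C bindings, e.g. as implemented by mpich; the values
and messages here are made up):

    /* Rank 0 hands one integer to rank 1 with an explicit send and a
     * matching receive - no locks, no shared state to reason about. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size, value;
        MPI_Status status;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (size >= 2) {
            if (rank == 0) {
                value = 42;
                MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            } else if (rank == 1) {
                MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
                printf("rank 1 received %d from rank 0\n", value);
            }
        }

        MPI_Finalize();
        return 0;
    }

A pthreads version of the same handoff would typically need a mutex and
a condition variable just to pass that one value over safely.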

> >: *) Programs using DIPC can be run in a single computer, even on Linux
> >: kernels without DIPC support! There is no need to modify and compile
> >: the sources to achieve this.
> >
> >Ditto for MPI.
>
> Can you run an MPI application without first installing MPI ??

You need it installed where you compile the app, but you link with it
statically (not much of a problem, since at least mpich builds static
libraries by default anyway). You don't need anything installed on the
computers you run it on.
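
And on the "single computer" point above: the same source, unchanged,
also runs as a single process. A sketch only (assuming mpich and its
mpicc wrapper; the file name and commands in the comment are
hypothetical):

    /* Hypothetical build: "mpicc -o app app.c" against mpich's default
     * static libraries, then copy the binary to a machine with no MPI
     * installation and start it (directly, or with "mpirun -np 1 ./app"). */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int size;

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size == 1)
            printf("only one process: doing the work serially\n");
        MPI_Finalize();
        return 0;
    }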

> >: *) You don't have to learn some new programming interfaces.
> >
> >Yes you do - you have to learn shared memory. Which is much, much harder
> >for people to grasp than you might think.

Agreed.

Many parallel computers have supported this feature (the Cray T3E and
Intel Paragon spring to mind ;). It's not used. People _prefer_ using
MPI - it's a clean interface for what you're doing.

-- 
Trond Eivind Glomsrød
http://s9412a.steinan.ntnu.no/~teg/ ** teg@pvv.ntnu.no