Re: Remote fork() and Parallel Programming

mshar@vax.ipm.ac.ir
Thu, 18 Jun 1998 04:37:18 +0330


Hi,

<linker@nightshade.ml.org> wrote:

>If we agree that MPI is like assembly [...]

I think systems like PVM or MPI are more comparable to macro assemblers.
They are easier to use than sockets and the like, but they still offer
essentially the same programming model.
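
To make that concrete, here is a minimal sketch (standard MPI calls,
MPI-1 style) of two processes handing one integer over. The point is
not the API details but the shape of the code: the sends and receives
are still spelled out by hand, just as with sockets, only with less
plumbing:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, value = 0;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* explicit send: buffer, count, type, dest, tag, communicator */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* explicit matching receive */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("rank 1 got %d\n", value);
    }

    MPI_Finalize();
    return 0;
}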

> [...] then perhaps we need to put a lot
>more attention into making MPI faster. You make it faster on a single
>computer (like an SMP box) using shared memory. You make it faster on
>clusters through short-cut network stacks and optimized protocols. You
>make it faster by bringing basic core functions into the kernel where
>needed. You make hand-optimized ASM versions for various CPUs.
>
>Then you build DIPC on top of MPI.

DIPC is not built on top of MPI. It directly uses TCP/IP for its network
operations.

>Perhaps there needs to be a third alternative, a more user-friendly
>wrapper on MPI... Like C is on assembly.

Calling C a wrapper for assembly is a bit unfair. Yes, C allows the
programmer to do some lower-level operations and is not very strict in its
error checking, but much more important than that, it gives the programmer
a different view of the underlying hardware than assembly does.

What you do depends on what you consider more user-friendly. Are you
happy with the message passing model of programming in MPI? If yes, then
any modifications made to MPI will not (and should not) change this
programming model. But if you think a shared memory model is more
user-friendly, then the new system might perform better if it skipped MPI
and was built directly on lower-level networking services.

One very important point to keep in mind is that, exactly like assembly
and C, shared memory and message passing systems are equivalent in
programming power:

1) You can simulate messages with shared memory. This works even on
single-CPU systems; Linux using the (shared) memory of the computer to
implement System V messages is one example (see the sketch after this
list). Over a network, no simulation is necessary.

2) And you can simulate shared memory with messages, as in distributed
shared memory (sketched further below). On a single PC, no simulation is
necessary.
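
Here is a simplified illustration of point 1: passing a "message"
between two processes through a System V shared memory segment. The
struct layout and the polling flag are just assumptions for the example;
a real implementation (like the kernel's) would use proper
synchronization instead of polling, and would check for errors:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>

/* assumed message layout, just for the example */
struct msg { volatile int ready; char text[64]; };

int main(void)
{
    int shmid = shmget(IPC_PRIVATE, sizeof(struct msg), IPC_CREAT | 0600);
    struct msg *m = (struct msg *) shmat(shmid, NULL, 0);

    m->ready = 0;
    if (fork() == 0) {                /* child: the "sender" */
        strcpy(m->text, "hello through shared memory");
        m->ready = 1;                 /* the flag acts as the doorbell */
        _exit(0);
    }

    while (!m->ready)                 /* parent: the "receiver" */
        usleep(1000);                 /* polls; a semaphore would be cleaner */
    printf("received: %s\n", m->text);

    wait(NULL);
    shmdt(m);
    shmctl(shmid, IPC_RMID, NULL);
    return 0;
}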

This means that anything you can do with the message passing model can be
done with the shared memory model, and vice versa. In practice, however, a
problem might be better expressed in only one of the two forms. One could
use the "unnatural" solution, but implementing it would be awkward,
inefficient, etc.
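
To show why the other direction (point 2) can get awkward, here is a
sketch, with an invented, hypothetical request format, of simulating
shared memory with messages: one process owns the "memory", and every
read or write by the other becomes a message round trip:

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/wait.h>

/* hypothetical request format: 'r' = read a word, 'w' = write a word */
struct req { char op; int addr; int value; };

int main(void)
{
    int sv[2], got;
    struct req w = { 'w', 3, 99 }, r = { 'r', 3, 0 };

    socketpair(AF_UNIX, SOCK_STREAM, 0, sv);

    if (fork() == 0) {                /* child: owns the "memory" */
        int mem[16] = { 0 };
        struct req q;
        close(sv[0]);
        while (read(sv[1], &q, sizeof q) == sizeof q) {
            if (q.op == 'w')
                mem[q.addr] = q.value;
            else
                write(sv[1], &mem[q.addr], sizeof(int));
        }
        _exit(0);
    }

    close(sv[1]);                     /* parent: plays the "CPU" */
    write(sv[0], &w, sizeof w);       /* stands in for: mem[3] = 99   */
    write(sv[0], &r, sizeof r);       /* stands in for: got = mem[3], */
    read(sv[0], &got, sizeof got);    /* ...at the cost of a round trip */
    printf("read back %d\n", got);

    close(sv[0]);                     /* EOF stops the child's loop */
    wait(NULL);
    return 0;
}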

It is my belief that most programs can use shared memory in a very
natural way, but any step toward making MPI faster or more user-friendly
will certainly help some people.

-Kamran Karimi
