Open MPI User's Mailing List Archives

From: Brian Barrett (brbarret_at_[hidden])
Date: 2006-04-16 14:12:09


On Apr 16, 2006, at 8:18 AM, Sang Chul Choi wrote:

> 1. I could not find any documentation except the FAQ and mailing
> list for Open MPI. Is there a user manual or something like that?
> Or, can the LAM/MPI manual be used instead?

Unfortunately, at this time, the only documentation available for
Open MPI is the FAQ and the mailing list. There are some fairly
significant differences between Open MPI and LAM/MPI, so while the
LAM/MPI manuals could be a starting point, they should not be
relied on for the details.

> 2. Another question is about installation.
> If I want to use rsh/ssh for Open MPI, do I have to install
> Open MPI on all master and slave nodes? Or, should I use
> something like an NFS file system so that even though I installed
> Open MPI on only the master node, all the other slave nodes could
> see the Open MPI installation on the master node?

Like LAM/MPI, Open MPI doesn't really care either way on this
point. This is also somewhat of a religious issue -- people seem to
have strong opinions both ways. The advantage of the NFS approach
is that it makes it trivial to keep the software installs in sync
on all the nodes. The advantage of the local-disk approach is that
there is significantly less strain on the NFS server during process
startup. For development, I tend to go with the NFS approach, since
I'm constantly updating my installation. For large production
clusters, I prefer installing on each node. But unless your cluster
is really large (or your NFS server is really slow), either
approach should work.
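
For reference, a shared-prefix install might look something like
this (the /opt/openmpi path is just an example; adjust for your
site):

   # On the master node, build and install into an NFS-exported
   # prefix:
   shell$ ./configure --prefix=/opt/openmpi
   shell$ make all install

   # Each node mounts /opt/openmpi and adds it to the environment,
   # e.g. in every user's shell startup file:
   shell$ export PATH=/opt/openmpi/bin:$PATH
   shell$ export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH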

> The error I got was from trying to run a program on two slave
> nodes; the shell complained that there is no orted. It is true
> that Open MPI is not installed on the slave nodes.

Yes, that is the expected error if you try to run on a node without
an Open MPI installation. If you ensure that a copy of Open MPI is
installed (and in your path) on each node, your problem should go away.
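
A quick way to check is to make sure orted is found through a
non-interactive ssh login on each node (node1 here is a
hypothetical host name):

   shell$ ssh node1 which orted
   /opt/openmpi/bin/orted

Alternatively, mpirun's --prefix option tells the remote nodes
where the Open MPI installation lives without editing their shell
startup files:

   shell$ mpirun --prefix /opt/openmpi -np 4 --host node1,node2 ./a.out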

Hope this helps,

Brian

-- 
   Brian Barrett
   Open MPI developer
   http://www.open-mpi.org/