As another, similar question about installation:
I think Open MPI has to be installed on the
master and all slave nodes. A program that uses MPI
also seems to need to be installed on the master and all slave nodes,
unless I use NFS. My question is this:
if I used OpenPBS software rather than rsh/ssh,
would this installation problem for Open MPI and/or an MPI program
be solved? Or, even if I used OpenPBS, would I still need
a copy of my MPI program installed on the master and
all the slave nodes?
In other words, my question is about the difference between using rsh/ssh
and using OpenPBS.
Thank you, Brian. And thank you again.

On 4/16/06, Brian Barrett <email@example.com> wrote:

On Apr 16, 2006, at 8:18 AM, Sang Chul Choi wrote:
> 1. I could not find any documentation except the FAQ and mailing list
> for Open MPI. Is there a user manual or something like that?
> Or can the LAM/MPI manual be used instead?
Unfortunately, at this time, the only documentation available for
Open MPI is the FAQ and the mailing list. There are some fairly
significant differences between Open MPI and LAM/MPI, so while the
LAM/MPI manuals could be a start, they will not entirely apply to Open MPI.
> 2. Another question is about installation.
> If I want to use rsh/ssh with Open MPI, do I have to install
> Open MPI on the master and all slave nodes? Or should I use
> something like an NFS file system, so that even though I installed
> Open MPI on only the master node, all the other slave nodes could
> see the Open MPI installation on the master node?
Like LAM/MPI, Open MPI doesn't really care about this point. This is
also somewhat of a religious point -- people seem to have strong
opinions either way. The advantage of the NFS approach is that it
makes it trivial to keep the software installs in sync on all the
nodes. The advantage of the installation on local disk approach is
that there is significantly less strain on the NFS server during
process startup. For development, I tend to go with the NFS
approach, since I'm constantly updating my installation. For large
cluster production installs, I prefer the installation on each node
approach. But unless your cluster is really large (or your NFS
server is really slow) either approach should work.
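The two layouts described above can be sketched as build steps. This is my illustration, not from the original thread; the prefixes, node names, and NFS mount point are all hypothetical, and the key constraint is only that the install prefix resolves to the same path on every node.

```shell
# NFS approach: install once into a prefix that every node mounts
# at the same path (/nfs/apps/openmpi-1.0 is a hypothetical mount).
./configure --prefix=/nfs/apps/openmpi-1.0
make all install

# Local-disk approach: use the same prefix on every node, building
# once and copying the install tree out (slave1/slave2 are
# hypothetical hostnames).
./configure --prefix=/opt/openmpi-1.0
make all install
for node in slave1 slave2; do
  rsync -a /opt/openmpi-1.0/ "$node":/opt/openmpi-1.0/
done
```

Either way, mpirun expects to find orted at the same location on each node, which is why a mismatched or missing install produces the "no orted" error discussed below in this thread.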
> The error I got was when I tried to run a program on two slave
> nodes: the shell complained that there is no orted. It is true
> that Open MPI is not installed on the slave nodes.
Yes, that is the expected error if you try to run on a node without
an Open MPI installation. If you ensure that a copy of Open MPI is
installed (and in your path) on each node, your problem should go away.
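The fix suggested here amounts to verifying that orted is reachable through the PATH on every node. A minimal sketch of that check (the helper function is my own invention; in practice you would run it on each slave node via rsh/ssh as well):

```shell
# Sketch: report whether a given command, e.g. orted, is reachable
# through PATH on this node.
check_in_path() {
  command -v "$1" >/dev/null 2>&1
}

if check_in_path orted; then
  echo "orted found at: $(command -v orted)"
else
  echo "orted NOT in PATH on $(hostname)"
fi
# To check a remote node (hypothetical hostname):
#   ssh slave1 'command -v orted || echo "orted missing"'
```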
Hope this helps,

Brian
Open MPI developer
Live, Learn, and Love!
E-mail : goshng at empal dot com
goshng at gmail dot com
Home : +1-919-468-2578
Address : 1528 Macalpine Circle
Morrisville, NC 27560