
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] Setting up Open MPI to run on multiple servers
From: Rayne (lancer6238_at_[hidden])
Date: 2008-08-12 03:00:35

Hi, thanks for your reply.

I did what you said: set up password-less ssh, NFS, etc., and put the IP address of the server in the default hostfile (on my PC only; the default hostfile on the server does not contain any IP addresses). Then I installed Open MPI on the server under the same directory as on my PC, e.g. /usr/lib/openmpi/1.2.5-gcc/
All my MPI programs and executables, e.g. a.out, are in the shared folder. However, I have trouble running the MPI programs.
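For reference, a default hostfile of the kind described above is just a list of hosts, one per line; a minimal sketch (the address and slot count here are placeholders, and the sketch writes to a file in the current directory rather than the real install prefix) is:

```shell
# Sketch: append one host entry to a hostfile in the current directory.
# The real default hostfile lives under the install prefix, e.g.
#   /usr/lib/openmpi/1.2.5-gcc/etc/openmpi-default-hostfile
# Each line names one host (or IP), optionally with a slot (CPU) count.
echo "192.168.0.10 slots=2" >> ./openmpi-default-hostfile
cat ./openmpi-default-hostfile
```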

After compiling my MPI program on my PC, I tried to run it with "mpiexec -n 2 ./a.out", but got the error message:

"Failed to find or execute the following executable:
Host: (the name of the server)
Executable: ./a.out

Cannot continue"
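Since the message says the named host failed to find or execute ./a.out, one hedged check is to confirm the binary is visible at the same path on the remote machine (the host name "server" and the path /shared here are placeholders, not from the thread):

```shell
# Check that the binary exists and is executable at the same path
# on the remote host ("server" and /shared are placeholders):
ssh server ls -l /shared/a.out

# Launching with an absolute path removes any dependence on the
# remote process's working directory:
mpiexec -n 2 /shared/a.out
```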

Then when I tried to run the MPI program on my server after compiling, I got the error:

"Lamnodes Failed!
Check if you had booted lam before calling mpiexec else use -machinefile to pass host file to mpiexec"
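"lamnodes" is a LAM/MPI command, so a message like this suggests the mpiexec found first in the server's PATH belongs to LAM/MPI rather than Open MPI. A hedged way to check (the install prefix is taken from the message above; whether LAM/MPI is actually installed on the server is an assumption):

```shell
# See which mpiexec the shell resolves to; if it lives outside the
# Open MPI tree, a different MPI is being picked up:
which mpiexec

# Put the Open MPI binaries first in PATH, then check again:
export PATH=/usr/lib/openmpi/1.2.5-gcc/bin:$PATH
which mpiexec
```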

I'm guessing that because the server cannot run the MPI program, I can't run the program on my PC either. Are there some other configurations I missed when using Open MPI on my server?

Thank you.


--- On Tue, 12/8/08, Joshua Bernstein <jbernstein_at_[hidden]> wrote:

> From: Joshua Bernstein <jbernstein_at_[hidden]>
> Subject: Re: [OMPI users] Setting up Open MPI to run on multiple servers
> To: lancer6238_at_[hidden], "Open MPI Users" <users_at_[hidden]>
> Date: Tuesday, 12 August, 2008, 8:34 AM
> Rayne wrote:
> > Hi all,
> >
> > I am trying to set up Open MPI to run on multiple servers, but as
> > I have very little experience in networking, I'm getting confused
> > by the info on, with the .rhosts, rsh, ssh etc.
> >
> > Basically what I have now is a PC with Open MPI installed. I want
> > to connect it to, say, 10 servers, so I can run MPI programs on
> > all 11 nodes. From what I've read, I think I need to install Open
> > MPI on the 10 servers too, and there must be a shared directory
> > where I keep all the MPI programs I've written, so all nodes can
> > access them.
> >
> > Then I need to create a machine file on my local PC with the list
> > of the 10 servers. (I found a default hostfile
> > "openmpi-default-hostfile" in {prefix}/etc/. Can I use that
> > instead, so I need not have "-machinefile machine" with every
> > mpiexec?) I'm assuming I need to put down the IP addresses of the
> > 10 servers in this file. I've also read that the 10 servers each
> > need a .rhosts file that tells them the machine (i.e. my local
> > PC) and user from which the programs may be launched. Is this
> > right?
> >
> > There is also the rsh/ssh configuration, which I find the most
> > confusing. How do I know whether I'm using rsh or ssh? Is
> > following the instructions on under "3: How can I make ssh not
> > ask me for a password?" sufficient? Does this mean that when I'm
> > using the 10 servers to run the MPI program, I'm logging in to
> > them via ssh? Is this necessary in every case?
> >
> > Is doing all of the above all it takes to run MPI programs on all
> > 11 nodes, or is there something else I missed?
> More or less. The first step, though, is to set up password-less SSH
> between all 11 machines. I'd completely skip the use of RSH, as it's
> very insecure and shouldn't be used in a non-dedicated cluster, and
> even then... You should basically set up SSH so a user can SSH from
> one node to another without specifying a password or entering any
> other information.
>
> The next step is to set up NFS. NFS provides you with a way to share
> a directory on one computer with many other computers, avoiding the
> hassle of having to copy all your MPI programs to all of the nodes.
> This is generally as easy as configuring /etc/exports and then just
> mounting the directory on the other computers. Be sure you mount the
> directories in the same place on every node, though.
>
> Lastly, give your MPI programs a shot. You don't strictly need a
> hostfile, because you can specify the hostnames (or IPs) on the
> mpirun command line, but in your case it's likely a good idea.
>
> Hope that gets you started...
>
> -Joshua Bernstein
> Software Engineer
> Penguin Computing
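For concreteness, the password-less SSH setup Joshua describes can be sketched like this (the host name "node01" is a placeholder; on systems without ssh-copy-id, the public key can instead be appended to ~/.ssh/authorized_keys on the remote machine by hand):

```shell
# Generate a key pair with an empty passphrase
# (only appropriate on trusted machines):
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Install the public key on each of the other nodes:
ssh-copy-id node01

# Verify: this should print the remote hostname without prompting
# for a password.
ssh node01 hostname
```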

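Likewise, a minimal sketch of the NFS step on Linux (the directory name /shared, the network range, and the host name "fileserver" are illustrative assumptions, not details from the thread):

```shell
# On the machine holding the programs: export the shared directory.
# Add a line like the following to /etc/exports, then reload the
# export table:
#     /shared  192.168.0.0/24(rw,sync)
exportfs -ra

# On every other node: mount the export at the SAME path, as advised
# above.
mkdir -p /shared
mount -t nfs fileserver:/shared /shared
```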