
Open MPI User's Mailing List Archives


This web mail archive is frozen; no new mails have been added to it since July of 2016.

Subject: Re: [OMPI users] mpirun unsuccessful when run across multiple nodes
From: Reuti (reuti_at_[hidden])
Date: 2011-04-19 05:02:54


Good, then please supply a hostfile with the names of the machines you want to use for a particular run and pass it as an option to `mpiexec`. See the -np and -machinefile options.
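For example, a minimal sketch of the hostfile-plus-`mpiexec` invocation (the hostnames and slot counts are placeholders, not taken from this thread):

```shell
# Create a hostfile listing the machines to use; "node1".."node3" and the
# slot counts are placeholders for the actual cluster nodes.
cat > hostfile <<'EOF'
node1 slots=4
node2 slots=4
node3 slots=4
EOF

# Launch 6 processes spread across the machines in the hostfile.
# (Guarded so the sketch is harmless on a machine without Open MPI.)
if command -v mpiexec >/dev/null 2>&1; then
    mpiexec -np 6 -machinefile hostfile hostname
fi
```

If the setup is correct, the `hostname` command runs on every node listed, so the output should show all the machine names rather than just one.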

-- Reuti

On 19.04.2011, at 06:38, mohd naseem wrote:

> Sir,
> When I give the `mpiexec hostname` command,
> it only gives one hostname; the rest are not shown.
>
> On Mon, Apr 18, 2011 at 7:46 PM, Reuti <reuti_at_[hidden]> wrote:
> On 18.04.2011, at 15:40, chenjie gu wrote:
>
> > I am new to Open MPI. I have the following Open MPI setup, but it has a problem when running across multiple nodes.
> > I am trying to build a Beowulf cluster from 6 nodes of our server (HP Proliant G460 G7). I have installed Open MPI on one node (under /mirror):
> > ./configure --prefix=/mirror/openmpi CC=icc CXX=icpc F77=ifort FC=ifort
> > make all install
> >
> > Using NFS, the /mirror directory was successfully exported to the remaining 5 nodes. Now, as I test Open MPI, it runs very well on a single node,
> > but it hangs when run across multiple nodes.
> >
> > One possible reason I know of is that Open MPI uses TCP to exchange data between nodes, so I am worried that
> > there may be firewalls between the nodes, which could be factory-integrated somewhere (switch/NIC). Could anyone give me some
> > information on this point?
>
> It's not only about MPI communication. Before that, you need some means to allow the startup of the local ORTE daemons on each machine: passphraseless ssh keys or, better, hostbased authentication (http://arc.liv.ac.uk/SGE/howto/hostbased-ssh.html), or enable `rsh` on the machines and tell Open MPI to use it. Is:
>
> mpiexec hostname
>
> giving you a list of the involved machines?
>
> -- Reuti
>
>
> > Thanks a lot,
> > Regards,
> > ArchyGU
> > Nanyang Technological University
> > _______________________________________________
> > users mailing list
> > users_at_[hidden]
> > http://www.open-mpi.org/mailman/listinfo.cgi/users
>
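The passphraseless-ssh setup Reuti refers to can be sketched roughly as follows; the node names node2..node6 are placeholders, not taken from this thread:

```shell
# A minimal sketch of passphraseless ssh from the head node to the
# compute nodes; "node2".."node6" are placeholder hostnames.
mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"

# Generate a key pair only if none exists yet (empty passphrase: -N "").
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -q -t rsa -N "" -f "$HOME/.ssh/id_rsa"

# Copy the public key to every compute node (prompts for the password once
# per node); report unreachable nodes instead of aborting.
for node in node2 node3 node4 node5 node6; do
    ssh-copy-id "$node" 2>/dev/null || echo "could not reach $node"
done
```

Afterwards `ssh node2 hostname` should print the remote hostname without a password prompt; once that works for every node, `mpiexec hostname` should list all of the involved machines.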