Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] mpirun unsuccessful when run across multiple nodes
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2011-04-20 10:48:10


You need to compile your cpi.c to get an executable. This is not an MPI issue. :-)

Also, mpdboot is part of a different MPI implementation named MPICH; you don't need to run mpdboot with Open MPI. If you have further questions about MPICH, you'll need to ping them on their mailing list -- we aren't able to answer MPICH questions here, sorry!

(background: MPI = a book. It's a specification. There's a bunch of different implementations of that specification available; Open MPI is one [great] one. :-) MPICH is another. There are also others.)
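For completeness, a minimal sketch of the compile-then-run cycle Jeff is describing. (`cpi_hello.c` below is a hypothetical stand-in for the poster's cpi.c; `mpicc` is Open MPI's C wrapper compiler.)

```shell
# Write a minimal MPI program (stand-in for the poster's cpi.c).
cat > cpi_hello.c <<'EOF'
#include <mpi.h>
#include <stdio.h>
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    printf("hello from rank %d\n", rank);
    MPI_Finalize();
    return 0;
}
EOF
# Compile it into an executable, then launch the *binary*, not the .c file:
#   mpicc cpi_hello.c -o cpi_hello
#   mpiexec -np 4 ./cpi_hello
```

Running `mpiexec -np 4 ./cpi.c` hands the C source file to the shell as if it were a program, which is exactly what produces the "[Errno 13] Permission denied" seen below.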

On Apr 20, 2011, at 10:24 AM, mohd naseem wrote:

> the following error shows:
>
> [mpiuser_at_f2 programrun]$ mpiexec -np 4 ./cpi.c
> problem with execution of ./cpi.c on f2: [Errno 13] Permission denied
> problem with execution of ./cpi.c on f2: [Errno 13] Permission denied
> [mpiuser_at_f2 programrun]$ mpdboot -n 2 -v
> totalnum=2 numhosts=1
> there are not enough hosts on which to start all processes
>
>
>
> On Wed, Apr 20, 2011 at 7:51 PM, mohd naseem <naseemshakeel_at_[hidden]> wrote:
> Sir, I am still not able to trace all the hosts.
> The following error shows:
>
>
>
> [mpiuser_at_f2 programrun]$ mpiexec -np 4 ./cpi.c
> problem with execution of ./cpi.c on f2: [Errno 13] Permission denied
> problem with execution of ./cpi.c on f2: [Errno 13] Permission denied
>
>
>
> On Tue, Apr 19, 2011 at 8:25 PM, Ralph Castain <rhc_at_[hidden]> wrote:
> You have to tell mpiexec what nodes you want to use for your application. This is typically done either on the command line or in a file. For now, you could just do this:
>
> mpiexec -host node1,node2,node3 -np N ./my_app
>
> where node1,node2,node3,... are the names or IP addresses of the nodes you want to run on, and N is the number of total processes you want executed.
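A hostfile is the persistent alternative to listing nodes with -host on every run. A minimal sketch (the node names and slot counts below are placeholders for your own cluster):

```shell
# Hypothetical hostfile; replace node1..node3 with your hostnames or IPs,
# and slots=4 with the number of cores each node should contribute.
cat > myhosts <<'EOF'
node1 slots=4
node2 slots=4
node3 slots=4
EOF
cat myhosts
# Then launch with, e.g.:
#   mpiexec -hostfile myhosts -np 12 ./my_app
```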
>
>
> On Apr 19, 2011, at 8:47 AM, mohd naseem wrote:
>
>>
>> Sorry sir,
>>
>> I am unable to understand what you are saying, because I am a new user of MPI.
>>
>> Please tell me the details and the commands as well.
>>
>> Thanks
>>
>>
>>
>> On Tue, Apr 19, 2011 at 2:32 PM, Reuti <reuti_at_[hidden]> wrote:
>> Good, then please supply a hostfile with the names of the machines you want to run for a particular run and give it as option to `mpiexec`. See options -np and -machinefile.
>>
>> -- Reuti
>>
>>
>> On 19.04.2011 at 06:38, mohd naseem wrote:
>>
>> > Sir,
>> > when I give the `mpiexec hostname` command,
>> > it only gives one hostname; the rest are not shown.
>> >
>> >
>> >
>> >
>> >
>> >
>> > On Mon, Apr 18, 2011 at 7:46 PM, Reuti <reuti_at_[hidden]> wrote:
>> > On 18.04.2011 at 15:40, chenjie gu wrote:
>> >
>> > > I am new to Open MPI. I have the following Open MPI setup; however, it has problems when running across multiple nodes.
>> > > I am trying to build a Beowulf cluster from 6 nodes of our server (HP ProLiant G460 G7). I have installed Open MPI on one node (under /mirror):
>> > > ./configure --prefix=/mirror/openmpi CC=icc CXX=icpc F77=ifort FC=ifort
>> > > make all install
>> > >
>> > > Using NFS, the /mirror directory was successfully exported to the other 5 nodes. Now as I test Open MPI, it runs very well on a single node,
>> > > however it hangs across multiple nodes.
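An aside not raised in the thread but common with this layout: when Open MPI lives under an NFS-shared prefix, every node's environment must point at it, including for non-interactive ssh shells. A sketch, assuming the /mirror/openmpi prefix from the configure line above:

```shell
# Add these to a startup file that non-interactive shells read on every
# node (e.g. ~/.bashrc), so remotely launched orted daemons find Open MPI.
export PATH=/mirror/openmpi/bin:$PATH
export LD_LIBRARY_PATH=/mirror/openmpi/lib:$LD_LIBRARY_PATH
echo "$PATH" | grep -q '/mirror/openmpi/bin' && echo "PATH configured"
```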
>> > >
>> > > Now, one possible reason as I understand it is that Open MPI uses TCP to exchange data between nodes, so I am worried about
>> > > whether there are firewalls between the nodes, which could be factory-integrated somewhere (switch/NIC). Could anyone give me some
>> > > information on this point?
>> >
>> > It's not only about MPI communication. First you need some means to allow the startup of the local orte daemons on each machine, either by passphraseless ssh keys or (better) hostbased authentication http://arc.liv.ac.uk/SGE/howto/hostbased-ssh.html , or by enabling `rsh` on the machines and telling Open MPI to use it. Is:
>> >
>> > mpiexec hostname
>> >
>> > giving you a list of the involved machines?
>> >
>> > -- Reuti
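A sketch of the ssh prerequisite Reuti describes, assuming the default ssh launcher; the node names and username are placeholders, and the commands are written to a checklist rather than executed here:

```shell
# Before mpiexec can start processes on other nodes, the head node must
# reach each worker without a password prompt (Open MPI launches an orted
# daemon there over ssh by default). Run these once from the head node:
cat > ssh_checklist.txt <<'EOF'
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa   # key with an empty passphrase
ssh-copy-id mpiuser@node2                  # repeat for every worker node
ssh node2 true                             # must return with no prompt
mpiexec -host node1,node2 hostname         # should print every node's name
EOF
cat ssh_checklist.txt
```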
>> >
>> >
>> > > Thanks a lot,
>> > > Regards,
>> > > ArchyGU
>> > > Nanyang Technological University
>> > > _______________________________________________
>> > > users mailing list
>> > > users_at_[hidden]
>> > > http://www.open-mpi.org/mailman/listinfo.cgi/users
>> >
>> >
>>
>>
>
>

-- 
Jeff Squyres
jsquyres_at_[hidden]
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/