I have already seen this FAQ. Nodes in the cluster do not have multiple
IP addresses. One thing I forgot to mention is that the systems in the
cluster do not have static IPs; they get their IP addresses through DHCP.
Also, if there is a print statement (printf("hello world\n");) in the slave,
it is correctly printed on the master's console, but none of the MPI calls work.
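For reference, the test code is essentially the following. This is a minimal sketch of the "master sends an integer to the slave" program described below, not the exact source being run; the value 42 and the rank layout are illustrative assumptions:

```c
/* Minimal sketch: rank 0 (master) sends one integer to rank 1 (slave). */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;                       /* arbitrary test value */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        printf("hello world\n");          /* this printf does appear */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);      /* this is where TCP connect fails */
        printf("slave received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}
```

With one machine (mpirun -np 2 hello) both printfs appear; across the two hosts only the first one does.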
>I need to make that error string be google-able -- I'll add it to the
>The problem is likely that you have multiple IP addresses, some of
>which are not routable to each other (but fail OMPI's routability
>assumptions). Check out these FAQ entries:
>Does this help?
>On Apr 19, 2007, at 11:07 AM, Babu Bhai wrote:
>> I have migrated from LAM/MPI to OpenMPI. I am not able to
>> execute simple MPI code in which the master sends an integer to the slave.
>> If I execute the code on a single machine, i.e. start 2 instances on the
>> same machine (mpirun -np 2 hello), this works fine.
>> If I execute it in the cluster using mpirun --prefix /usr/local -
>> np 2 --host 184.108.40.206,220.127.116.11 hello
>> it gives the following error: "btl_tcp_endpoint.c:
>> 572:mca_btl_tcp_endpoint_complete_connect] connect() failed with
>> I am using openmpi-1.2
>> users mailing list
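If the routability problem does come down to which interfaces the TCP BTL picks, the usual workaround from the FAQ entries mentioned above is to restrict it to one interface that all nodes can reach. A sketch of the invocation; "eth0" is an assumption, substitute the interface your nodes actually share:

```shell
# Restrict Open MPI's TCP BTL to a single routable interface.
mpirun --prefix /usr/local \
       --mca btl_tcp_if_include eth0 \
       -np 2 --host 184.108.40.206,220.127.116.11 hello
```

The equivalent btl_tcp_if_exclude parameter can instead be used to drop a known-bad interface (e.g. a virtual or loopback one).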