Open MPI User's Mailing List Archives

Subject: [OMPI users] Returned "Unreachable" (-12) instead of "Success" (0)
From: devendra rai (rai.devendra_at_[hidden])
Date: 2012-05-16 06:22:51


Hello All,

I am trying to run an Open MPI application across two physical machines. I get the error "Returned "Unreachable" (-12) instead of "Success" (0)", and looking through the logs (attached) I cannot find the cause or how to fix it. I see a lot of (communication) components being loaded and then unloaded, and I cannot tell which nodes pick up which kind of communication interface.

--------------------------------------------------------------------------
At least one pair of MPI processes are unable to reach each other for
MPI communications.  This means that no Open MPI device has indicated
that it can be used to communicate between these processes.  This is
an error; Open MPI requires that all MPI processes be able to reach
each other.  This error can sometimes be the result of forgetting to
specify the "self" BTL.

  Process 1 ([[10782,1],6]) is on host: tik34x
  Process 2 ([[10782,1],0]) is on host: tik33x
  BTLs attempted: self sm tcp

Your MPI job is now going to abort; sorry.
--------------------------------------------------------------------------

The "mpirun" line is:

  mpirun --mca btl self,sm,tcp --mca btl_base_verbose 30 -report-pid -display-map -report-bindings -hostfile hostfile -np 7 -v --rankfile rankfile.txt -v --timestamp-output --tag-output ./xstartwrapper.sh ./run_gdb.sh

where the .sh files are fixes for forwarding X windows from the multiple machines to the machine where I am logged in.

Can anyone help? Thanks a lot.

Best,
Devendra
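In case it helps narrow things down, one thing I could try is restricting the TCP BTL to a single interface that both hosts share. A trimmed-down variant of the same command would be (eth0 here is only a placeholder; I have not verified which interface the two machines actually have in common):

  # eth0 is an assumed interface name; replace it with one present on both tik33x and tik34x
  mpirun --mca btl self,sm,tcp --mca btl_tcp_if_include eth0 --mca btl_base_verbose 30 -hostfile hostfile --rankfile rankfile.txt -np 7 ./xstartwrapper.sh ./run_gdb.sh

I am not sure interface selection is actually the cause; this is just a sketch of how I would pin the TCP BTL while debugging.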

attached mail follows: