Open MPI User's Mailing List Archives

From: Galen M. Shipman (gshipman_at_[hidden])
Date: 2006-11-20 15:25:55


>m2001(120) > mpirun -np 6 -hostfile hostsfile -mca btl mx,self b_eff
>
>

This does appear to be a bug. Note, though, that you are using the MX BTL;
our higher-performance path is the MX MTL. To use it, try:

mpirun -np 6 -hostfile hostsfile -mca pml cm b_eff
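As a quick sanity check (assuming ompi_info is in your PATH), you can confirm
that the MX BTL and MX MTL components were actually built into your
installation with something like:

ompi_info | grep mx

If no "mtl: mx" component shows up in that listing, the cm PML will not be
able to use MX.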

Also, just for grins, could you try:

mpirun -np 6 -hostfile hostsfile -mca btl mx,sm,self b_eff

This will still use the BTL interface, but adds shared-memory (sm)
communication between processes on the same node.
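
If that still fails, it would help to see why the MX BTL thinks the peers are
unreachable. Assuming your build exposes the usual framework verbosity
parameter (btl_base_verbose), something like the following should produce more
verbose output from the BTL components during startup:

mpirun -np 6 -hostfile hostsfile -mca btl mx,sm,self -mca btl_base_verbose 30 b_eff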

Thanks,

Galen

>--------------------------------------------------------------------------
>Process 0.1.0 is unable to reach 0.1.0 for MPI communication.
>If you specified the use of a BTL component, you may have
>forgotten a component (such as "self") in the list of
>usable components.
>--------------------------------------------------------------------------
>--------------------------------------------------------------------------
>Process 0.1.2 is unable to reach 0.1.0 for MPI communication.
>If you specified the use of a BTL component, you may have
>forgotten a component (such as "self") in the list of
>usable components.
>--------------------------------------------------------------------------
>--------------------------------------------------------------------------
>Process 0.1.4 is unable to reach 0.1.4 for MPI communication.
>If you specified the use of a BTL component, you may have
>forgotten a component (such as "self") in the list of
>usable components.
>--------------------------------------------------------------------------
>--------------------------------------------------------------------------
>Process 0.1.1 is unable to reach 0.1.0 for MPI communication.
>If you specified the use of a BTL component, you may have
>forgotten a component (such as "self") in the list of
>usable components.
>--------------------------------------------------------------------------
>--------------------------------------------------------------------------
>Process 0.1.5 is unable to reach 0.1.4 for MPI communication.
>If you specified the use of a BTL component, you may have
>forgotten a component (such as "self") in the list of
>usable components.
>--------------------------------------------------------------------------
>--------------------------------------------------------------------------
>Process 0.1.3 is unable to reach 0.1.0 for MPI communication.
>If you specified the use of a BTL component, you may have
>forgotten a component (such as "self") in the list of
>usable components.
>--------------------------------------------------------------------------
>--------------------------------------------------------------------------
>It looks like MPI_INIT failed for some reason; your parallel process is
>likely to abort. There are many reasons that a parallel process can
>fail during MPI_INIT; some of which are due to configuration or environment
>problems. This failure appears to be an internal failure; here's some
>additional information (which may only be relevant to an Open MPI
>developer):
>
> --------------------------------------------------------------------------
>It looks like MPI_INIT failed for some reason; your parallel process is
>likely to abort. There are many reasons that a parallel process can
>fail during MPI_INIT; some of which are due to configuration or environment
>problems. This failure appears to be an internal failure; here's some
>additional information (which may only be relevant to an Open MPI
>developer):
>
> PML add procs failed
> --> Returned "Unreachable" (-12) instead of "Success" (0)
>--------------------------------------------------------------------------
>PML add procs failed
> --> Returned "Unreachable" (-12) instead of "Success" (0)
>--------------------------------------------------------------------------
>--------------------------------------------------------------------------
>It looks like MPI_INIT failed for some reason; your parallel process is
>likely to abort. There are many reasons that a parallel process can
>fail during MPI_INIT; some of which are due to configuration or environment
>problems. This failure appears to be an internal failure; here's some
>additional information (which may only be relevant to an Open MPI
>developer):
>
> PML add procs failed
> --> Returned "Unreachable" (-12) instead of "Success" (0)
>--------------------------------------------------------------------------
>*** An error occurred in MPI_Init
>*** before MPI was initialized
>*** MPI_ERRORS_ARE_FATAL (goodbye)
>*** An error occurred in MPI_Init
>*** before MPI was initialized
>--------------------------------------------------------------------------
>It looks like MPI_INIT failed for some reason; your parallel process is
>likely to abort. There are many reasons that a parallel process can
>fail during MPI_INIT; some of which are due to configuration or environment
>problems. This failure appears to be an internal failure; here's some
>additional information (which may only be relevant to an Open MPI
>developer):
>
> PML add procs failed
> --> Returned "Unreachable" (-12) instead of "Success" (0)
>--------------------------------------------------------------------------
>*** An error occurred in MPI_Init
>*** before MPI was initialized
>*** MPI_ERRORS_ARE_FATAL (goodbye)
>*** MPI_ERRORS_ARE_FATAL (goodbye)
>m2001(121) > mpirun -np 4 -hostfile hostsfile -mca btl mx b_eff
>--------------------------------------------------------------------------
>Process 0.1.0 is unable to reach 0.1.0 for MPI communication.
>If you specified the use of a BTL component, you may have
>forgotten a component (such as "self") in the list of
>usable components.
>--------------------------------------------------------------------------
>--------------------------------------------------------------------------
>Process 0.1.1 is unable to reach 0.1.0 for MPI communication.
>If you specified the use of a BTL component, you may have
>forgotten a component (such as "self") in the list of
>usable components.
>--------------------------------------------------------------------------
>--------------------------------------------------------------------------
>Process 0.1.2 is unable to reach 0.1.0 for MPI communication.
>If you specified the use of a BTL component, you may have
>forgotten a component (such as "self") in the list of
>usable components.
>--------------------------------------------------------------------------
>--------------------------------------------------------------------------
>Process 0.1.3 is unable to reach 0.1.0 for MPI communication.
>If you specified the use of a BTL component, you may have
>forgotten a component (such as "self") in the list of
>usable components.
>--------------------------------------------------------------------------
>--------------------------------------------------------------------------
>It looks like MPI_INIT failed for some reason; your parallel process is
>likely to abort. There are many reasons that a parallel process can
>fail during MPI_INIT; some of which are due to configuration or environment
>problems. This failure appears to be an internal failure; here's some
>additional information (which may only be relevant to an Open MPI
>developer):
>
> PML add procs failed
> --> Returned "Unreachable" (-12) instead of "Success" (0)
>--------------------------------------------------------------------------
>--------------------------------------------------------------------------
>It looks like MPI_INIT failed for some reason; your parallel process is
>likely to abort. There are many reasons that a parallel process can
>fail during MPI_INIT; some of which are due to configuration or environment
>problems. This failure appears to be an internal failure; here's some
>additional information (which may only be relevant to an Open MPI
>developer):
>
> PML add procs failed
> --> Returned "Unreachable" (-12) instead of "Success" (0)
>--------------------------------------------------------------------------
>*** An error occurred in MPI_Init
>*** before MPI was initialized
>*** MPI_ERRORS_ARE_FATAL (goodbye)
>*** An error occurred in MPI_Init
>*** before MPI was initialized
>*** MPI_ERRORS_ARE_FATAL (goodbye)
>--------------------------------------------------------------------------
>It looks like MPI_INIT failed for some reason; your parallel process is
>likely to abort. There are many reasons that a parallel process can
>fail during MPI_INIT; some of which are due to configuration or environment
>problems. This failure appears to be an internal failure; here's some
>additional information (which may only be relevant to an Open MPI
>developer):
>
> PML add procs failed
> --> Returned "Unreachable" (-12) instead of "Success" (0)
>--------------------------------------------------------------------------
>*** An error occurred in MPI_Init
>*** before MPI was initialized
>*** MPI_ERRORS_ARE_FATAL (goodbye)
>--------------------------------------------------------------------------
>It looks like MPI_INIT failed for some reason; your parallel process is
>likely to abort. There are many reasons that a parallel process can
>fail during MPI_INIT; some of which are due to configuration or environment
>problems. This failure appears to be an internal failure; here's some
>additional information (which may only be relevant to an Open MPI
>developer):
>
> PML add procs failed
> --> Returned "Unreachable" (-12) instead of "Success" (0)
>--------------------------------------------------------------------------
>*** An error occurred in MPI_Init
>*** before MPI was initialized
>*** MPI_ERRORS_ARE_FATAL (goodbye)
>
>
>------------------------------------------
>Dr E L Heck
>
>University of Durham
>Institute for Computational Cosmology
>Ogden Centre
>Department of Physics
>South Road
>
>DURHAM, DH1 3LE
>United Kingdom
>
>e-mail: lydia.heck_at_[hidden]
>
>Tel.: + 44 191 - 334 3628
>Fax.: + 44 191 - 334 3645
>___________________________________________