Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] OSU_latency_mt is failing
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2011-12-06 09:13:11


The openib BTL does not support threading, so it disabled itself at run-time. Since your command line restricted Open MPI to the openib and self BTLs, that left only "self" (loopback within a single process) available; Open MPI therefore decided that there was no network path between the 2 MPI processes, and aborted.
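For background, osu_latency_mt calls MPI_Init_thread() and needs MPI_THREAD_MULTIPLE. Here's a minimal sketch of that request-and-check pattern (illustrative only, not the benchmark's actual source):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided;

        /* Request full multi-threading; the library reports what it
           can actually provide in "provided". */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

        if (provided < MPI_THREAD_MULTIPLE) {
            /* The library (or the transports it was able to keep)
               cannot support concurrent MPI calls from multiple
               threads, so bail out. */
            fprintf(stderr, "MPI_THREAD_MULTIPLE not available (got %d)\n",
                    provided);
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        /* ...multi-threaded send/recv work would go here... */

        MPI_Finalize();
        return 0;
    }

As a sanity check (untested on your setup), try taking openib out of the picture and running over TCP, e.g. "mpirun -np 2 --mca btl tcp,self -H 192.168.0.174,192.168.0.175 /root/ramu/ofed_pkgs/osu_benchmarks-3.1.1/osu_latency_mt". If that works, it confirms that the openib BTL's lack of thread support is what is biting you. Also note that the "MPI_Init_thread() function was called before MPI_INIT was invoked" message looks alarming, but it appears to be a secondary symptom of aborting inside MPI_INIT rather than the root cause.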

On Dec 6, 2011, at 4:17 AM, bhimesh akula wrote:

> Hi,
>
> I tried to run osu_latency_mt as described below.
>
> First, I built Open MPI with multi-threading support, since osu_latency_mt requires it:
>
> > [root@localhost openmpi-1.4.3]# ./configure --with-threads=posix --enable-mpi-threads
>
> > make && make install
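A quick way to confirm that the build actually picked up thread support is to check the ompi_info output, e.g. "ompi_info | grep -i thread"; on the 1.4 series it should report something like "Thread support: posix (mpi: yes, progress: no)".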
>
>
> > [root@localhost openmpi-1.4.3]# mpirun --prefix /usr/local/ -np 2 --mca btl openib,self -H 192.168.0.174,192.168.0.175 /root/ramu/ofed_pkgs/osu_benchmarks-3.1.1/osu_latency_mt
>
> --------------------------------------------------------------------------
> WARNING: No preset parameters were found for the device that Open MPI
> detected:
>
> Local host: test2
> Device name: plx2_0
> Device vendor ID: 0x10b5
> Device vendor part ID: 4277
>
> Default device parameters will be used, which may result in lower
> performance. You can edit any of the files specified by the
> btl_openib_device_param_files MCA parameter to set values for your
> device.
>
> NOTE: You can turn off this warning by setting the MCA parameter
> btl_openib_warn_no_device_params_found to 0.
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> At least one pair of MPI processes are unable to reach each other for
> MPI communications. This means that no Open MPI device has indicated
> that it can be used to communicate between these processes. This is
> an error; Open MPI requires that all MPI processes be able to reach
> each other. This error can sometimes be the result of forgetting to
> specify the "self" BTL.
>
> Process 1 ([[29990,1],0]) is on host: localhost.localdomain
> Process 2 ([[29990,1],1]) is on host: 192
> BTLs attempted: self
>
> Your MPI job is now going to abort; sorry.
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort. There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems. This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
>
> PML add procs failed
> --> Returned "Unreachable" (-12) instead of "Success" (0)
> --------------------------------------------------------------------------
> *** The MPI_Init_thread() function was called before MPI_INIT was invoked.
> *** This is disallowed by the MPI standard.
> *** Your MPI job will now abort.
> [localhost.localdomain:32216] Abort before MPI_INIT completed successfully; not able to guarantee that all other processes were killed!
> --------------------------------------------------------------------------
> mpirun has exited due to process rank 0 with PID 32216 on
> node localhost.localdomain exiting without calling "finalize". This may
> have caused other processes in the application to be
> terminated by signals sent by mpirun (as reported here).
> --------------------------------------------------------------------------
> *** The MPI_Init_thread() function was called before MPI_INIT was invoked.
> *** This is disallowed by the MPI standard.
> *** Your MPI job will now abort.
> [test2:2104] Abort before MPI_INIT completed successfully; not able to guarantee that all other processes were killed!
> [localhost.localdomain:32214] 1 more process has sent help message help-mca-bml-r2.txt / unreachable proc
> [localhost.localdomain:32214] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
> [localhost.localdomain:32214] 1 more process has sent help message help-mpi-runtime / mpi_init:startup:internal-failure
>
>
> All of the other MPI tests ran fine; only this case fails, with the message "The MPI_Init_thread() function was called before MPI_INIT was invoked".
>
> Please suggest how to get this running.
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users

-- 
Jeff Squyres
jsquyres_at_[hidden]
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/