
Subject: [OMPI users] Problems with btl openib and MPI_THREAD_MULTIPLE
From: Markus Wittmann (markus.wittmann_at_[hidden])
Date: 2012-11-07 07:13:45


Hello,

I've compiled Open MPI 1.6.3 with --enable-mpi-thread-multiple --with-tm
--with-openib --enable-opal-multi-threads.

When I run, for example, the PingPong benchmark from the Intel MPI
Benchmarks, which calls MPI_Init, the openib BTL is used and everything
works fine.

When the benchmark instead calls MPI_Init_thread with
MPI_THREAD_MULTIPLE as the requested threading level, the openib BTL
fails to load but gives no further hint about the reason:

mpirun -v -n 2 -npernode 1 -gmca btl_base_verbose 200 ./imb-tm-openmpi-ts pingpong

...
[l0519:08267] select: initializing btl component openib
[l0519:08267] select: init of component openib returned failure
[l0519:08267] select: module openib unloaded
...

The question now is: is support for MPI_THREAD_MULTIPLE simply missing
from the openib module at the moment, or are other errors occurring,
and if so, how can I identify them?
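
For reference, the thread-multiple runs initialize MPI roughly as in
the following sketch (a minimal plain-C sketch, not the actual IMB
source); checking the provided level at least shows whether the library
silently granted less than was requested:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int provided, rank;

    /* Request full thread support; the library may grant less. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0 && provided < MPI_THREAD_MULTIPLE)
        printf("requested MPI_THREAD_MULTIPLE, got level %d\n", provided);

    MPI_Finalize();
    return 0;
}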

Attached are the config.log from the Open MPI build, the ompi_info
output, and the output of the IMB PingPong benchmark.

The system used consisted of two nodes with:

  - OpenFabrics 1.5.3
  - CentOS release 5.8 (Final)
  - Linux Kernel 2.6.18-308.11.1.el5 x86_64
  - OpenSM 3.3.3

[l0519] src > ibv_devinfo
hca_id: mlx4_0
        transport: InfiniBand (0)
        fw_ver: 2.7.000
        node_guid: 0030:48ff:fff6:31e4
        sys_image_guid: 0030:48ff:fff6:31e7
        vendor_id: 0x02c9
        vendor_part_id: 26428
        hw_ver: 0xB0
        board_id: SM_2122000001000
        phys_port_cnt: 1
                port: 1
                        state: PORT_ACTIVE (4)
                        max_mtu: 2048 (4)
                        active_mtu: 2048 (4)
                        sm_lid: 48
                        port_lid: 278
                        port_lmc: 0x00

Thanks in advance for the help.

Regards,
Markus

-- 
Markus Wittmann, HPC Services
Friedrich-Alexander-Universität Erlangen-Nürnberg
Regionales Rechenzentrum Erlangen (RRZE)
Martensstrasse 1, 91058 Erlangen, Germany
Tel.: +49 9131 85-20104
markus.wittmann_at_[hidden]
http://www.rrze.fau.de/hpc/