I've compiled Open MPI 1.6.3 with --enable-mpi-thread-multiple --with-tm.
When I use, for example, the pingpong benchmark from the Intel MPI
Benchmarks, which calls MPI_Init, the openib BTL is used and everything
works fine.
When instead the benchmark calls MPI_Init_thread with
MPI_THREAD_MULTIPLE as the requested threading level, the openib BTL
fails to load but gives no further hints about the reason:
mpirun -v -n 2 -npernode 1 -gmca btl_base_verbose 200 ./imb-
[l0519:08267] select: initializing btl component openib
[l0519:08267] select: init of component openib returned failure
[l0519:08267] select: module openib unloaded
The question is now: is support for MPI_THREAD_MULTIPLE currently just
missing in the openib module, or are other errors occurring, and if so,
how can they be identified?
Attached are the config.log from the Open MPI build, the ompi_info
output, and the output of the IMB pingpong benchmark.
The system used consisted of two nodes with:
- OpenFabrics 1.5.3
- CentOS release 5.8 (Final)
- Linux Kernel 2.6.18-308.11.1.el5 x86_64
- OpenSM 3.3.3
[l0519] src > ibv_devinfo
transport: InfiniBand (0)
state: PORT_ACTIVE (4)
max_mtu: 2048 (4)
active_mtu: 2048 (4)
Thanks for the help in advance.
Markus Wittmann, HPC Services
Regionales Rechenzentrum Erlangen (RRZE)
Martensstrasse 1, 91058 Erlangen, Germany
Tel.: +49 9131 85-20104