
Open MPI User's Mailing List Archives


Subject: [OMPI users] hwloc error in topology.c in OMPI 1.6.5
From: Gus Correa (gus_at_[hidden])
Date: 2014-02-27 18:04:42

Dear OMPI pros

This seems to be a question in the no man's land between OMPI and hwloc.
However, since it surfaced as an OMPI error, I hope it is OK to ask the
question on this list.


A user here got this error (or warning?) message today:

+ mpiexec -np 64 $HOME/echam-aiv_ldeo_6.1.00p1/bin/echam6
* Hwloc has encountered what looks like an error from the operating system.
* object intersection without inclusion!
* Error occurred in topology.c line 594
* Please report this error message to the hwloc user's mailing list,
* along with the output from the script.

Additional info:

1) We run OMPI 1.6.5. This user is using the build made
with Intel compilers 2011.13.367.

2) I set these MCA parameters in $OMPI/etc/openmpi-mca-params.conf
(includes binding to core):

btl = ^tcp
orte_tag_output = 1
rmaps_base_schedule_policy = core
orte_process_binding = core
orte_report_bindings = 1
opal_paffinity_alone = 1
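For reference, one can check which of these values OMPI actually picked up
with ompi_info; a quick sketch (the grep pattern below is just illustrative):

```shell
# List all MCA parameters as seen by this OMPI install, filtering
# for the binding/affinity settings above (pattern is illustrative)
ompi_info --param all all | grep -E 'binding|paffinity|schedule_policy'
```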

3) The machines are dual-socket nodes with 16-core AMD Opteron 6376
(Abu Dhabi) processors, which have one FPU shared by each pair of cores,
a hierarchy of caches serving subgroups of cores, etc.
The OS is Linux CentOS 6.4 with the stock CentOS OFED.
The interconnect is QDR InfiniBand (Mellanox hardware).

4) We have Torque 4.2.5, built with cpuset support.
OMPI is built with Torque (tm) support.

5) In case it helps, I am attaching the output of
hwloc-gather-topology, which I ran on the node that threw the error,
although not immediately after the job failure.
I used the hwloc-gather-topology script that comes with
the hwloc (version 1.5) provided by CentOS.
As far as I can tell, the hwloc nuts and bolts built into OMPI
do not include the hwloc-gather-topology script (although that may be a
newer hwloc version, 1.8 perhaps?).
Hopefully the mail servers won't chop off the attachments.
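For anyone who wants to reproduce the data gathering, this is roughly what
I ran on the affected node (hwloc 1.5 command syntax; the /tmp path is just
an example):

```shell
# Which hwloc does OMPI itself carry? (the component name shows the version)
ompi_info | grep -i hwloc

# Version of the system hwloc that provides hwloc-gather-topology
lstopo --version

# Gather the node topology; writes <name>.tar.bz2 and <name>.output
hwloc-gather-topology /tmp/$(hostname)
```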

6) I am a bit surprised by this error message, because I haven't
seen it before, although we have used OMPI 1.6.5 on
this machine with several other programs without problems.
Alas, it happened now.


- Is this a known hwloc problem in this processor architecture?

- Is this a known issue in this combination of HW and SW?

- Would it perhaps help not to bind the MPI processes (to core or socket)?

- Any workarounds or suggestions?
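In case the no-binding question above is worth testing, this is the kind of
run I would try, overriding the config-file binding for a single job (OMPI
1.6 mpiexec syntax):

```shell
# Same job, but with process binding disabled for this run only,
# overriding the "core" binding set in openmpi-mca-params.conf
mpiexec -np 64 --bind-to-none $HOME/echam-aiv_ldeo_6.1.00p1/bin/echam6
```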


Thank you,
Gus Correa