Open MPI Development Mailing List Archives

Subject: [OMPI devel] trunk broken?
From: Eugene Loh (eugene.loh_at_[hidden])
Date: 2012-08-30 10:50:29


Trunk broken? Last night, all of Oracle's MTT trunk runs came up
empty-handed. E.g.,

*** Process received signal ***
Signal: Segmentation fault (11)
Signal code: Address not mapped (1)
Failing at address: (nil)
[ 0] [0xffffe600]
[ 1] /lib/libc.so.6(strlen+0x33) [0x3fa0a3]
[ 2] /lib/libc.so.6(__strdup+0x25) [0x3f9de5]
[ 3] .../lib/openmpi/mca_db_hash.so [0xf7bbdd34]
[ 4] .../lib/libmpi.so.0(orte_util_decode_pidmap+0x5f4) [0xf7e46654]
[ 5] .../lib/libmpi.so.0(orte_util_nidmap_init+0x1b4) [0xf7e46d54]
[ 6] .../lib/openmpi/mca_ess_env.so [0xf7bc4f62]
[ 7] .../lib/libmpi.so.0(orte_init+0x160) [0xf7e2d250]
[ 8] .../lib/libmpi.so.0(ompi_mpi_init+0x163) [0xf7de2133]
[ 9] .../lib/libmpi.so.0(MPI_Init+0x13f) [0xf7dfb6df]
[10] ./c_ring [0x8048759]
[11] /lib/libc.so.6(__libc_start_main+0xdc) [0x3a0dec]
[12] ./c_ring [0x80486a1]
*** End of error message ***

This was with r27182. The previous night, with r27175, everything ran
fine. A quick peek at r27178 seems fine (I think).
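
FWIW, the top two frames (strlen called from __strdup, faulting address
(nil)) are the classic signature of strdup() being handed a NULL
pointer, presumably a pidmap string that was never stored. A minimal
sketch of that failure mode and the usual guard; the variable name is
hypothetical and this is not the actual mca_db_hash code:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    const char *value = NULL;  /* stand-in for a missing pidmap entry */

    /* strdup(value) here would call strlen(NULL) and fault at address
       (nil), exactly like the top two frames in the trace above. */

    /* Guarding before duplicating avoids the crash: */
    char *copy = (value != NULL) ? strdup(value) : NULL;
    printf("copy = %s\n", copy ? copy : "(nil)");

    free(copy);  /* free(NULL) is a no-op, so this is safe either way */
    return 0;
}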