
Open MPI User's Mailing List Archives


From: Daniël Mantione (daniel.mantione_at_[hidden])
Date: 2006-06-26 10:55:08


Hi!

Just tried out Open MPI 1.1. My first impression is that it does not seem
to be able to run executables built against Open MPI 1.0.2. The result of
such an attempt can be seen below.

Is it correct that Open MPI 1.1 cannot run 1.0.2 executables? If so,
shouldn't the major version of the shared library have been increased?

Daniël Mantione

ham1:/usr/local/Cluster-Apps/openmpi/intel # cat /tmp/x
stfile hostfile -np 4 ./yafs.bin 1.2M
./yafs.bin: Symbol `ompi_mpi_comm_world' has different size in shared
object, consider re-linking
[the same message is printed by each of the other three ranks]
--------------------------------------------------------------------------
The MCA parameter "mpi_paffinity_alone" was set to a nonzero value,
but Open MPI was unable to bind MPI_COMM_WORLD rank 0 to a processor.

Typical causes for this problem include:

   - A node was oversubscribed (more processes than processors), in
     which case Open MPI will not bind any processes on that node
   - A startup mechanism was used which did not tell Open MPI which
     processors to bind processes to
--------------------------------------------------------------------------
[the same warning is printed for ranks 1, 2, and 3]
Signal:11 info.si_errno:0(Success) si_code:128()
Failing at addr:(nil)
[0] func:/usr/local/Cluster-Apps/openmpi/intel/64/1.1/lib64/libopal.so.0
[0x2aaaab19eba9]
[1] func:/lib64/tls/libpthread.so.0 [0x2aaaabde22c0]
[2] func:/lib64/tls/libc.so.6 [0x2aaaabf91b44]
[3] func:/lib64/tls/libc.so.6 [0x2aaaabf92f73]
[4] func:./yafs.bin(_ZN6IoData11readCmdLineEiPPc+0x31) [0x4cec89]
[5] func:./yafs.bin(main+0x90) [0x4557f8]
[6] func:/lib64/tls/libc.so.6(__libc_start_main+0xda) [0x2aaaabf075aa]
[7] func:./yafs.bin(__gxx_personality_v0+0x22a) [0x4556ca]
*** End of error message ***