Open MPI User's Mailing List Archives


From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2005-11-10 09:35:13

I'm sorry -- I wasn't entirely clear:

1. Are you using a 1.0 nightly tarball or a 1.1 nightly tarball? We
have made a bunch of fixes to the 1.1 tree (i.e., the Subversion
trunk), but have not fully vetted them yet, so they have not been
taken to the 1.0 release branch. If you have not done so already,
could you try a tarball from the trunk?

2. The error you are seeing looks like a proxy process is failing to
start because it seg faults. Are you getting corefiles? If so, can
you send the backtrace? The corefile should be from the
$prefix/bin/orted executable.
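
A rough sketch of what collecting that backtrace might look like (the
install prefix and corefile name below are illustrative; substitute
your actual --prefix and the core file your system produces):

```shell
# Make sure the shell is allowed to write corefiles at all
ulimit -c unlimited

# Re-run to reproduce the crash
mpirun -np 2 ./test

# Load the resulting corefile against the orted binary
# (hypothetical paths; adjust to your installation)
gdb /home/clement/openmpi/bin/orted core.12345
# then type "bt" at the (gdb) prompt to print the backtrace
```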

3. Failing that, can you run with the "-d" switch? It should give a
bunch of debugging output that might be helpful. "mpirun -d -np 2
./test", for example.

4. Also please send the output of the "ompi_info" command.

On Nov 10, 2005, at 9:05 AM, Clement Chu wrote:

> I have tried the latest version (rc5 8053), but the error is still
> here.
> Jeff Squyres wrote:
>> We've actually made quite a few bug fixes since RC4 (RC5 is not
>> available yet). Would you mind trying with a nightly snapshot
>> tarball?
>> (there were some SVN commits last night after the nightly snapshot was
>> made; I've just initiated another snapshot build -- r8085 should be on
>> the web site within an hour or so)
>> On Nov 10, 2005, at 4:38 AM, Clement Chu wrote:
>>> Hi,
>>> I got an error when I tried mpirun on an MPI program. The following
>>> is the error message:
>>> [clement_at_kfc TestMPI]$ mpicc -g -o test main.c
>>> [clement_at_kfc TestMPI]$ mpirun -np 2 test
>>> mpirun noticed that job rank 1 with PID 0 on node "localhost" exited
>>> on
>>> signal 11.
>>> [kfc:28466] ERROR: A daemon on node localhost failed to start as
>>> expected.
>>> [kfc:28466] ERROR: There may be more information available from
>>> [kfc:28466] ERROR: the remote shell (see above).
>>> [kfc:28466] The daemon received a signal 11.
>>> 1 additional process aborted (not shown)
>>> [clement_at_kfc TestMPI]$
>>> I am using openmpi-1.0rc4 and running on Linux Red Hat Fedora Core 4.
>>> The kernel is 2.6.12-1.1456_FC4. My building procedure is as below:
>>> 1. ./configure --prefix=/home/clement/openmpi --with-devel-headers
>>> 2. make all install
>>> 3. login root. add openmpi's path and lib in /etc/bashrc
>>> 4. see the $PATH and $LD_LIBRARY_PATH as below
>>> [clement_at_kfc TestMPI]$ echo $PATH
>>> /usr/java/jdk1.5.0_05/bin:/home/clement/openmpi/bin:/usr/java/
>>> jdk1.5.0_05/bin:/home/clement/mpich-1.2.7/bin:/usr/kerberos/bin:/usr/
>>> local/bin:/usr/bin:/bin:/usr/X11R6/bin:/home/clement/bin
>>> [clement_at_kfc TestMPI]$ echo $LD_LIBRARY_PATH
>>> /home/clement/openmpi/lib
>>> [clement_at_kfc TestMPI]$
>>> 5. go to mpi program's directory
>>> 6. mpicc -g -o test main.c
>>> 7. mpirun -np 2 test
>>> Any idea what this problem might be? Many thanks.

-- 
Jeff Squyres
The Open MPI Project