On Oct 30, 2007, at 9:42 AM, Jorge Parra wrote:
> Thank you for your reply. Linux does not freeze. The one that
> freezes is
> OpenMPI. Sorry for my inaccurate choice of words that led to the confusion.
> Therefore dmesg does not show anything abnormal (I attached to this
> email a full dmesg log, captured when openmpi freezes).
> When openmpi freezes I can, from another terminal, see that the
> node on
> which openmpi is originally run (the local one) has two processes:
> and mpirun. The remote node has one: orted. This seems to be normal.
> However, on both nodes there is no openmpi activity. There is
> an initial "calling init" printout on the local node (I included it in
> the greetings.c program for testing purposes).
> Unfortunately, I have not been able to compile openmpi 1.2.4 or any
> of the
> 1.2 versions. Versions 1.0 and 1.1 compiled well on my system. I
> already opened a case for this, but I received a message that the
> person it was assigned to is on paternity leave. So I think I need to wait a
> bit for
> help on that :). So I am stuck with version 1.1.5.
Are you referring to this thread:
There's currently only one person on paternity leave, and although he
is the powerpc guy :-), he's not really the build system guy (I'm
kinda *guessing* that either OMPI or libltdl is choosing to build or
link the wrong object -- but that's a SWAG without seeing any logs).
I sent you a reply on 24 Oct asking for a bit more information:
> I am running openmpi as root because my system has some special
> conditions. This is an attempt to make an embedded Massively Parallel
> Processor (MPP), so the nodes are running embedded versions of Linux,
> where normally there is just one user (root). Since this is an embedded
> system, I did not think this could be a problem (I don't care about
> security issues either).
> Again, thank you for all your help,
> On Tue, 30 Oct 2007, Rainer Keller wrote:
>> Hello Jorge,
>> On Monday 29 October 2007 18:27, Jorge Parra wrote:
>>> When running openMPI my system freezes when initializing MPI
>>> (MPI_Init). This happens only when I try to run the process on multiple
>>> nodes in my cluster. Running multiple instances of the testing code
>>> locally (i.e. ./mpirun -np 2 greetings) is successful.
>> would it be possible to repeat the tests with the latest Open MPI
>> release? Nothing in Open MPI should make your system freeze, though.
>> Could you check the logs on the nodes and possibly have a dmesg
>> created just
>> before the MPI_Init...
>>> - rsh runs well, and is configured for full access (i.e. "rsh
>>> 192.168.1.103 date" is successful, as are "rsh AFRLMPPBM2
>>> date" and
>>> "rsh AFRLMPPBM2.MPPdomain.com"). Security is not an issue in this
>>> system.
>>> - uname -n and hostname return a valid hostname
>>> - The testing code (attached to this email) is run (and fails) as:
>>> ./mpirun --hostfile /root/hostfile -np 2 greetings . The hostfile
>>> has the
>>> names of the local node (first entry: AFRLMPPBM1) and the remote node
>>> (second entry: AFRLMPPBM2). This file is also attached to this
>>> email.
>>> - The environment variables seem to be properly set (see env.log
>>> file). Local mpi programs (i.e. ./mpirun -np 2 greetings) run well.
>>> - .profile has the path information for both the executables and the
>>> libraries.
>>> - orted runs on the remote node, however it does not print
>>> anything to the
>>> console. The only output on the remote node is:
>>> pam_rhosts_auth: user root has a `+' user entry
>>> pam_rhosts_auth: allowed to root_at_[hidden] as
>>> PAM_unix: (rsh) session opened for user root by (uid=0)
>>> in.rshd: root_at_[hidden] as root: cmd='( ! [ -e
>>> ./.profile ]
>>> || . ./.profile; orted --bootproxy 1 --name 0.0.1 --num_procs 3
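For reference, an Open MPI hostfile for the setup described above would simply list the node names, one per line. The sketch below is reconstructed from the node names in the message, not the attached original (slot counts are omitted, so Open MPI's defaults apply):

```
AFRLMPPBM1
AFRLMPPBM2
```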
>> You're running as root? Why is that?
>>> Then the remote process returns to the command prompt. However orted is
>>> in the
>>> background. The local process is frozen, and just prints
>>> "Calling init",
>>> which comes just before MPI_Init (see greetings.c).
>>> I believe the COMM WORLD cannot be correctly initialized. However,
>>> I can't
>>> see which part of my configuration is wrong.
>>> Any help is greatly appreciated.
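For context, a minimal greetings.c consistent with the behavior described in the thread (a "Calling init" diagnostic printed just before MPI_Init) might look like the following. This is a sketch reconstructed from the discussion, not the attached original:

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    /* Diagnostic marker: the thread reports the program freezes right
     * after this line, i.e. inside MPI_Init during multi-node startup. */
    printf("Calling init\n");
    fflush(stdout);

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Greetings from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```

A hang at this point with orted alive on the remote node but no further output is typically a wire-up problem (the remote daemon cannot open a connection back to mpirun), which is consistent with the symptoms reported above.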
>> With best regards,