
Subject: Re: [OMPI users] requirement on ssh when run openmpi
From: meng (qsmeng_at_[hidden])
Date: 2013-07-31 18:45:31

Dear Dani and Reuti,

>> either install openmpi on each node, and setup /etc/profile.d/openmpi.{c,}sh and /etc/ files on both (preferred) or install to a common file system (e.g. nfs mount) and still use profile and ldconfig to setup environment.
     I chose to install openmpi on each node.
     But I don't know the difference between the following two methods of setting PATH. In the first method, I set PATH and LD_LIBRARY_PATH in .bashrc and then source .bashrc. The second is as Dani suggested, but it does not seem easy to write openmpi.csh.
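     To make my question concrete, this is roughly what I imagine the two profile.d files would contain (only a guess based on the prefix I used, /usr/local/openmpi-1.6.5; I have not tested this):

      # /etc/profile.d/openmpi.sh  (read by sh/bash login shells)
      export PATH=/usr/local/openmpi-1.6.5/bin:$PATH
      export LD_LIBRARY_PATH=/usr/local/openmpi-1.6.5/lib:$LD_LIBRARY_PATH

      # /etc/profile.d/openmpi.csh  (read by csh/tcsh login shells)
      setenv PATH /usr/local/openmpi-1.6.5/bin:${PATH}
      if ($?LD_LIBRARY_PATH) then
          setenv LD_LIBRARY_PATH /usr/local/openmpi-1.6.5/lib:${LD_LIBRARY_PATH}
      else
          setenv LD_LIBRARY_PATH /usr/local/openmpi-1.6.5/lib
      endif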

>Where was Open MPI installed to? Maybe you need to set the $PATH for a non-interactive login in your ~/.bashrc to include this location on the slave node.

      I installed openmpi at /usr/local/openmpi-1.6.5 on both computers, and the two computers can now ssh into each other without a password being required. I set PATH and LD_LIBRARY_PATH in .bashrc and sourced it.
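      For reference, the lines I added to ~/.bashrc on both nodes are the following (I put them near the top of the file, before any "if not interactive, return" test, since I understand ssh starts a non-interactive shell that may stop reading .bashrc at that point):

      export PATH=/usr/local/openmpi-1.6.5/bin:$PATH
      export LD_LIBRARY_PATH=/usr/local/openmpi-1.6.5/lib:$LD_LIBRARY_PATH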
     I still get the same problem as before. In detail, it is as follows:

 bash: orted: command not found
A daemon (pid 9118) died unexpectedly with status 127 while attempting
to launch so we are aborting.

There may be more information reported by the environment (see above).

This may be because the daemon was unable to find all the needed shared
libraries on the remote node. You may set your LD_LIBRARY_PATH to have the
location of the shared libraries on the remote nodes and this will
automatically be forwarded to the remote nodes.
mpiexec noticed that the job aborted, but has no info as to the process
that caused that situation.
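      In case it is relevant, I also wonder whether passing the installation prefix to mpirun would sidestep the PATH problem on the remote node, for example (the host names and program here are placeholders, not my real setup):

      mpirun --prefix /usr/local/openmpi-1.6.5 -np 4 -host node1,node2 ./a.out

      As far as I understand, --prefix makes mpirun set PATH and LD_LIBRARY_PATH for the remote orted itself, instead of relying on the shell startup files.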
Thank you.

Best regards,
meng