Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] intel compiler linking issue and issue of environment variable on remote node, with open mpi 1.4.3
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2011-03-22 13:43:21

On Mar 21, 2011, at 8:21 AM, yanyg_at_[hidden] wrote:

> The issue is that I am trying to build Open MPI 1.4.3 with the Intel
> compiler libraries statically linked into it, so that when we run
> mpirun/orterun, it does not need to dynamically load any Intel
> libraries. But mpirun always asks for some Intel library if I do not
> put the Intel library path on the library search path
> ($LD_LIBRARY_PATH). I checked the Open MPI user archive; it seems
> some kind user mentioned using "-i-static" (in my case) or
> "-static-intel" in LDFLAGS. This is what I did, but it does not seem
> to work, and I did not find any confirmation in the archive of
> whether this works for anyone else. Could anyone help me with this?
> Thanks!

Is it Open MPI's executables that require the intel shared libraries at run time, or your application? Keep in mind the difference:

1. Compile/link flags that you specify to OMPI's configure script are used to compile/link Open MPI itself (including executables such as mpirun).
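For case 1, that means the static-Intel request belongs at configure time. A minimal sketch (the install prefix and compiler names here are assumptions; adjust for your system):

```shell
# Build Open MPI with the Intel compilers, asking the Intel toolchain
# to link its own runtime libraries statically.
# Note: -static-intel (or the older -i-static) is an Intel compiler
# flag, not an Open MPI one; it must reach the link lines via LDFLAGS.
./configure --prefix=/opt/openmpi-1.4.3 \
    CC=icc CXX=icpc F77=ifort FC=ifort \
    LDFLAGS="-static-intel"
make all install
```

You can then verify the result with `ldd /opt/openmpi-1.4.3/bin/mpirun`; if the static link took effect, no Intel runtime entries (e.g., libimf.so, libintlc.so) should remain in the output.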

2. mpicc (and friends) use a similar-but-different set of flags to compile and link MPI applications. Specifically, we try to use the minimal set of flags necessary to compile/link, and let the user add more flags if they want to. See the FAQ entry on the wrapper compilers for more details.
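To see which of the two cases you are in, you can ask the wrapper compilers what flags they add, then check what a binary actually needs at run time (the application name below is a placeholder):

```shell
# Show the flags the Open MPI wrapper compilers would use,
# without compiling anything:
mpicc --showme:compile   # compile flags added by the wrapper
mpicc --showme:link      # libraries/linker flags added by the wrapper

# Check which shared libraries a binary needs at run time:
ldd ./my_mpi_app         # your application
ldd $(which mpirun)      # Open MPI's own executables
```

If `ldd` lists Intel runtime libraries for mpirun itself, the fix belongs in Open MPI's configure flags; if only your application lists them, the fix belongs in your application's link line.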

> (2) After compiling and linking our in-house codes with Open MPI
> 1.4.3, we want to make a minimal set of executables for our codes,
> together with some from the Open MPI 1.4.3 installation, without any
> dependence on external settings such as environment variables, etc.
> I organize my directory as follows:
> parent/
> ├── package/
> ├── bin/
> ├── lib/
> └── tools/
> In the package/ directory are executables from our codes. bin/ has
> mpirun and orted, copied from the Open MPI installation. lib/
> includes the Open MPI libraries and the Intel libraries. tools/
> includes some C-shell scripts that launch MPI jobs using the mpirun
> in bin/.

FWIW, you can use the following options to Open MPI's configure script to eliminate all the OMPI plugins (i.e., all that code gets rolled up into libmpi and friends, vs. being built as standalone DSOs):

    --disable-shared --enable-static

This will make libmpi.a (vs. libmpi.so and a bunch of plugins) which your application can statically link against. But it does make a larger executable. Alternatively, you can configure with:

    --disable-dlopen

(instead of disable-shared/enable-static), which will make one giant libmpi.so (vs. libmpi.so plus all the plugin DSOs). So your MPI app will still dynamically link against libmpi, but all the plugins will be physically located in libmpi.so rather than being dlopen'ed at run time.
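Side by side, a sketch of the two configure variants (the prefix is an assumption; --disable-dlopen is the option that folds the plugin code into libmpi instead of dlopen'ing it):

```shell
# Variant A: static libmpi.a, plugins compiled in, no DSOs at all:
./configure --prefix=/opt/openmpi-1.4.3 --disable-shared --enable-static

# Variant B: keep a shared libmpi, but fold the plugins into it
# so nothing is dlopen'ed at run time:
./configure --prefix=/opt/openmpi-1.4.3 --disable-dlopen
```

Variant A simplifies your lib/ directory at the cost of larger executables; Variant B keeps dynamic linking but removes the run-time plugin loading that depends on finding the plugin directory.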

> The parent/ directory is on an NFS share visible to all nodes of the
> cluster. In ~/.bashrc (shared by all nodes too), I cleared PATH and
> LD_LIBRARY_PATH so that they do not point to any directory of the
> Open MPI 1.4.3 installation.
> First, if I add the above bin/ directory to PATH and lib/ to
> LD_LIBRARY_PATH in ~/.bashrc, our parallel codes (started by the
> C shell script in tools/) run AS EXPECTED without any problem, so
> I know I have everything else set up right.
> Then, to avoid modifying ~/.bashrc or ~/.profile, I instead set bin/
> on PATH and lib/ on LD_LIBRARY_PATH in the C shell script under the
> tools/ directory, as:
> setenv PATH /path/to/bin:$PATH
> setenv LD_LIBRARY_PATH /path/to/lib:$LD_LIBRARY_PATH

Instead, you might want to try:

   /path/to/mpirun ...

which will do the same thing as mpirun's --prefix option (see the mpirun(1) man page for details), and/or use the --enable-mpi-prefix-by-default configure option. That option, as is probably pretty obvious :-), makes mpirun behave as if --prefix had been specified on the command line, with an argument equal to the $prefix given to configure.
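Concretely (the install path and job parameters below are placeholders):

```shell
# Invoking mpirun by absolute path makes it add its own installation's
# bin/ and lib/ to PATH and LD_LIBRARY_PATH on the remote nodes,
# as if --prefix had been given:
/opt/openmpi-1.4.3/bin/mpirun -np 4 -hostfile hosts ./my_mpi_app

# Equivalent, with an explicit --prefix (the prefix is the directory
# that contains bin/ and lib/):
mpirun --prefix /opt/openmpi-1.4.3 -np 4 -hostfile hosts ./my_mpi_app
```

This avoids editing ~/.bashrc entirely: the remote orted processes get the right paths from mpirun itself, not from the shell startup files.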

Jeff Squyres