Open MPI User's Mailing List Archives

From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2007-01-02 09:18:33


Welcome back from the holidays! I'll try to catch up on the
right-before-the-holidays e-mail today...

On Dec 21, 2006, at 6:07 PM, Dennis McRitchie wrote:

> I am trying to build openmpi so that mpicc does not require me to set up
> the compiler's environment, and so that any executables built with mpicc
> can run without my having to point LD_LIBRARY_PATH to the openmpi lib

We had not really considered this use case before. The current
assumption (as you undoubtedly figured out) is that on the node where
you're invoking OMPI commands, the PATH/LD_LIBRARY_PATH has been
set up properly.
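
For example, the working assumption is something like the following in
your shell startup files on each node (a sketch assuming bash and the
/usr/local/openmpi-1.1.2-intel prefix from your output):

  export PATH=/usr/local/openmpi-1.1.2-intel/bin:$PATH
  export LD_LIBRARY_PATH=/usr/local/openmpi-1.1.2-intel/lib:$LD_LIBRARY_PATH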

I'm not saying that we can't change this; I'm just trying to give you
the rationale for why the wrappers are the way they [currently] are.

> directory. I made some unsuccessful attempts to accomplish this (which I
> describe below), but after building openmpi using the Intel compiler, I
> found the following:
>
> 1) When typing "<path-to-mpicc>/mpicc -showme" I get:
> <path-to-mpicc>/mpicc: error while loading shared libraries: libsvml.so:
> cannot open shared object file: No such file or directory
>
> I then set LD_LIBRARY_PATH to point to the Intel compiler libraries, and
> now "-showme" works, and returns:
> icc -I/usr/local/openmpi-1.1.2-intel/include
> -I/usr/local/openmpi-1.1.2-intel/include/openmpi -pthread
> -L/usr/local/openmpi-1.1.2-intel/lib -L/usr/ofed/lib -L/usr/ofed/lib64
> -lmpi -lorte -lopal -libverbs -lrt -lpbs -lnsl -lutil

This behavior reflects the current assumption (above).
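
If I'm remembering right, the Intel compilers also ship environment
scripts that set all of this up for you, so something like the following
before invoking mpicc should work as well (a sketch using the install
location from later in your mail):

  source /opt/intel/cce/latest/bin/iccvars.sh
  source /opt/intel/fce/latest/bin/ifortvars.sh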

> However...
>
> 2) When typing "<path-to-mpicc>/mpicc hello.c" I now get:
> --------------------------------------------------------------------------
> The Open MPI wrapper compiler was unable to find the specified compiler
> icc in your PATH.
>
> Note that this compiler was either specified at configure time or in
> one of several possible environment variables.
> --------------------------------------------------------------------------
>
> Of course, this is due to the fact that -showme indicates that mpicc
> invokes "icc" instead of "<path-to-icc>/icc". If I now set up the PATH
> to the Intel compiler, it works. However...

Mmm. Yes. Also a good point; another working assumption is that
you're set up for your compiler as well (re: PATH, LD_LIBRARY_PATH,
LM_LICENSE_FILE, ...etc.). OMPI *does* save the absolute pathname of
the compiler, but we had shied away from using it in the wrappers by
default for a few reasons:

1. You may not have the compiler installed in the same location on
all nodes.

2. There may be other factors that need to be set up in the
environment (such as an env variable containing a license file) that
the wrapper compilers are not currently set up to handle.

3. As you noted later, users can specify an absolute path name in CC,
CXX (and friends) to configure and that propagates through. Hence,
users have the choice of specifying the full pathname if they want
to; OMPI's current setup allows you to do it either way.

Additionally, be aware that the wrapper compilers are configurable
via a text file. Check out this section of the FAQ:
http://www.open-mpi.org/faq/?category=mpi-apps#override-wrappers-after-v1.0
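
The file itself lives under the installation prefix -- from memory
(double-check against the FAQ and your install), something like:

  /usr/local/openmpi-1.1.2-intel/share/openmpi/mpicc-wrapper-data.txt

and it is just key=value lines (compiler=, compiler_flags=,
linker_flags=, libs=, etc.) that you can edit directly.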

> 3) When I try to run the executable thus created, I get:
> ./a.out: error while loading shared libraries: libmpi.so.0: cannot open
> shared object file: No such file or directory
>
> I now need to set LD_LIBRARY_PATH to point to the openmpi lib directory.

Correct. The mpirun --prefix option may help here, though (as does its
synonym: invoking mpirun via its full absolute path).
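
For example (a sketch using your install prefix):

  mpirun --prefix /usr/local/openmpi-1.1.2-intel -np 2 ./a.out

or, equivalently, invoking mpirun by its absolute path:

  /usr/local/openmpi-1.1.2-intel/bin/mpirun -np 2 ./a.out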

> -------------------------------------------------------
> -------------------------------------------------------
>
> To avoid problems (1) and (2), I built openmpi with:
> export CC=/opt/intel/cce/latest/bin/icc
> export CXX=/opt/intel/cce/latest/bin/icpc
> export F77=/opt/intel/fce/latest/bin/ifort
> export FC=/opt/intel/fce/latest/bin/ifort
> export LDFLAGS="-Wl,-rpath,/opt/intel/cce/latest/lib,-rpath,/opt/intel/fce/latest/lib"
>
> But while this satisfied the configure script and all its tests, it did
> not produce the results I hoped for.
>
> To avoid problem (3), I added the following option to configure:
> --with-wrapper-ldflags=-Wl,-rpath,/usr/local/openmpi-1.1.2-intel/lib
>
> I was hoping "-showme" would add this to its parameters, but no such
> luck. Looking at the build output, the --with-wrapper-ldflags parameter
> seems to be parsed differently from how LDFLAGS gets parsed, and I get a
> compilation line:
> /opt/intel/cce/latest/bin/icc -O3 -DNDEBUG -fno-strict-aliasing -pthread
> -Wl,-rpath -Wl,/opt/intel/cce/latest/lib -Wl,-rpath
> -Wl,/opt/intel/fce/latest/lib -o .libs/opal_wrapper opal_wrapper.o
> ../../../opal/.libs/libopal.so -lnsl -lutil -Wl,--rpath
> -Wl,/usr/local/openmpi-1.1.2-intel/lib
>
> Notice that the rpath preceding the openmpi lib directory is specified
> as "--rpath", which is probably why it is ignored. Is this perhaps a
> bug?

Hmm. I'd have to trace through why that happens; that's pretty weird.
We put the --with-wrapper-*flags [almost] directly into the wrapper
compiler config text files, so there shouldn't be much
transmogrification happening there (the only change I see is a check
for uniqueness among the flags -- I could be missing something; it's
pretty tangled configure/m4 code).

I was actually unable to reproduce this behavior in the 1.1 and 1.2
series -- the Right flags ended up in the wrapper config file (i.e.,
"-Wl,-rpath,/usr/local/openmpi-1.1.2-intel/lib"). :-(

> Can you help me accomplish any or all of these goals?

Your best bet is probably to manually modify the wrapper config file(s).
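
For example (a sketch -- field names are from memory, so check your
installed mpicc-wrapper-data.txt), changing

  linker_flags=

to

  linker_flags=-Wl,-rpath,/usr/local/openmpi-1.1.2-intel/lib

should get the rpath into everything that mpicc links, and the same
edit can be made for the other wrappers (mpiCC, mpif77, mpif90).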

I can imagine the possibility of linking some of the OMPI commands
statically (on demand, of course -- so it would be optional -- and the
back-end libraries and compilers would have to support it as well) to
avoid problem (1), but I'd need to talk to some of the other OMPI
developers first to make sure I'm not missing something. Our build
system is so flexible and has to adapt to so many systems that it can
be pretty subtle sometimes...
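
(For the curious: if I recall correctly, the standard libtool options
to configure -- --enable-static --disable-shared -- will already build
Open MPI's own libraries statically, which addresses the libmpi.so.0
part of the problem, but not the compiler's shared runtime such as
libsvml.so.)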

-- 
Jeff Squyres
Server Virtualization Business Unit
Cisco Systems