Thanks for the response:
> mpirun --prefix /otherdir ...
> This might be good enough to do what you need.
I don't think this will work (or at least is not all that is needed). We are
actually already using the --prefix option to mpirun and still run into
the hard-coded-path problem.

When Fluent is distributed, we typically package/distribute all the
runtime MPI files that it needs so that the application is
self-contained. The distribution directory for the MPIs is different
from where the MPIs were originally built/installed (and will definitely
be different still when the user installs the application). We use the
common approach of setting LD_LIBRARY_PATH (or --prefix or other) in a
runtime wrapper script to reflect the final installation location and to
pick shared libraries at runtime. Thus, the application (and the MPI)
can be infinitely redistributed/installed and still function. Some MPIs
have a runtime-settable environment variable to define the final MPI
installation location. For instance, HP-MPI uses MPI_ROOT to define the
final MPI installation location at runtime.
OpenMPI seems to be a little different because the final installation
location seems to be fixed at compile time. When the libraries are
compiled, the installation location is encoded into the OpenMPI shared
libraries by the use of --rpath during linking (it's encoded into
libmpi.so and many of the shared libs under lib/openmpi). Thus, the
installation doesn't seem to be movable after it is originally installed.
For many users this works out well, but an option to build OpenMPI so
that it has the flexibility to be moved would be very nice :-). I was
able to play with the libtool file to get most of OpenMPI to build
without --rpath (I think ompi_info didn't build), so there may not be
too much involved. Whoever set up the shared library part of the build
process may know exactly what is needed. I can share what I've done if
that would be helpful.

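
Roughly, what I tried was along these lines after configure (a sketch of an unsupported hack, not a tested recipe; hardcode_libdir_flag_spec is the variable GNU Libtool's generated script uses for the -rpath linker flag):

```shell
# Unsupported hack: after configure, blank the rpath flag spec in the
# generated libtool scripts so libraries are linked without a
# hard-coded installation path.
for f in libtool opal/libltdl/libtool; do
  sed -i.bak 's|^hardcode_libdir_flag_spec=.*|hardcode_libdir_flag_spec=""|' "$f"
done
```

After that you'd run make as usual (and, as noted above, ompi_info didn't build for me with this change).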
Jeff Squyres wrote:
>This is certainly the default for GNU Libtool-built shared libraries
>(such as Open MPI). Ralf W -- is there a way to disable this?
>As a sidenote, we added the "--prefix" option to mpirun to be able to
>override the LD_LIBRARY_PATH on remote nodes for circumstances like
>this. For example, say you build/install to /somedir, but then
>distribute Open MPI and the user installs it to /otherdir. I know
>almost nothing about Fluent :-(, but do you wrap the call to mpirun
>in a script/executable somewhere? Such that you could hide:
> mpirun --prefix /otherdir ...
>This might be good enough to do what you need.
>Would that work?
>On Nov 20, 2006, at 2:54 PM, Patrick Jessee wrote:
>>Hello. I'm wondering if anyone knows of a way to get OpenMPI to
>>compile shared libraries without hard-coding the installation
>>directory in them. After compiling and installing OpenMPI, the
>>them. For instance, the installation directory is hard-coded in
>>them. For instance:
>>$ ldd libmpi.so
>> liborte.so.0 => /usr/local/fluent/develop/multiport4.4/
>> libnsl.so.1 => /lib64/libnsl.so.1 (0x0000002a95852000)
>> libutil.so.1 => /lib64/libutil.so.1 (0x0000002a95968000)
>> libm.so.6 => /lib64/tls/libm.so.6 (0x0000002a95a6c000)
>> libpthread.so.0 => /lib64/tls/libpthread.so.0
>> libc.so.6 => /lib64/tls/libc.so.6 (0x0000002a95cd8000)
>> libopal.so.0 => /usr/local/fluent/develop/multiport4.4/
>> /lib64/ld-linux-x86-64.so.2 (0x000000552aaaa000)
>> libdl.so.2 => /lib64/libdl.so.2 (0x0000002a9605a000)
>>In the above, "/usr/local/fluent/develop/multiport4.4/packages/
>>lnamd64/openmpi/openmpi-1.1.2/lib" is hardcoded into libmpi.so
>>using --rpath when libmpi.so is compiled.
>>This is problematic because the installation cannot be moved after
>>it is installed. It is often useful to compile/install libraries
>>on one machine and then move the libraries to a different location
>>on other machines (of course, LD_LIBRARY_PATH or some other means then
>>needs to be used to pick up the libs at runtime). This relocation is
>>also useful when redistributing the MPI installation with an
>>application. The hard-coded paths prohibit this.
>>I've tried to modify the "--rpath" argument in libtool and opal/
>>libltdl/libtool, but have not gotten this to work.
>>Has anyone else had experience with this? (I'm building OpenMPI
>>1.1.2 on linux x86_64.) Thanks in advance for any potential help.