Table of contents:
- How does Open MPI handle HFS+ / UFS filesystems?
- How do I use the Open MPI wrapper compilers in XCode?
- What versions of Open MPI support XGrid?
- How do I run jobs under XGrid?
- Where do I get more information about running under XGrid?
- Is Open MPI included in OS X?
- How do I not use the OS X-bundled Open MPI?
- I am using Open MPI 2.0.x and getting an error at application startup. How do I work around this?
|1. How does Open MPI handle HFS+ / UFS filesystems?|
Generally, Open MPI does not care whether it is running from
an HFS+ or UFS filesystem. However, the C++ wrapper compiler has
historically been called mpiCC, which is, of course, the same file as
mpicc on a case-insensitive filesystem such as HFS+. During the
configure process, Open MPI will attempt to determine whether the build
filesystem is case sensitive, and will assume that the install
filesystem behaves the same way. Generally, this is all that is needed
to deal with HFS+.
However, if you are building on UFS and installing to HFS+, you should
pass --without-cs-fs to configure to make sure Open
MPI does not build the mpiCC wrapper. Likewise, if you
build on HFS+ and install to UFS, you may want to specify
--with-cs-fs to ensure that the mpiCC wrapper is built.
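As a sketch, a configure invocation for the UFS-to-HFS+ case might look
like the following (the install prefix is purely illustrative):

```shell
# Building on case-sensitive UFS, installing to case-insensitive HFS+:
# suppress the mpiCC wrapper so it cannot collide with mpicc.
# (--prefix value is illustrative)
./configure --without-cs-fs --prefix=/opt/openmpi
```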
|2. How do I use the Open MPI wrapper compilers in XCode?|
XCode has a non-public interface for adding compilers to XCode. A
friendly Open MPI user sent in a configuration file for XCode 2.3,
MPICC.pbcompspec, which adds
support for the Open MPI wrapper compilers. The file should be copied
into /Library/Application Support/Apple/Developer Tools/Specifications/.
Upon starting, XCode loads this file and adds it to its list of
available compilers.
To use the mpicc compiler, open the project, get info on the
target, click the Rules tab, and add a new entry. Change the process rule
for "C source files" and select using MPICC.
Before moving the file, the ExecPath parameter should be set
to the location of the Open MPI install. The specification
should also be updated to refer to the compiler version that
the wrapper will invoke -- generally
gcc-4.0 on OS X 10.4 machines.
Thanks to Karl Dockendorf for this information.
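If you use this file, the copy step might look like the following (the
destination is the directory given above; quoting is required because
the path contains spaces):

```shell
# Copy the compiler specification into XCode's specifications directory
# so it is picked up the next time XCode starts.
sudo cp MPICC.pbcompspec \
    "/Library/Application Support/Apple/Developer Tools/Specifications/"
```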
|3. What versions of Open MPI support XGrid?|
XGrid is a batch-scheduling technology that was included in
some older versions of OS X. Support for XGrid appeared in the
following versions of Open MPI:
| Open MPI series | XGrid supported |
| v1.4 and beyond |                 |
|4. How do I run jobs under XGrid?|
XGrid support will be built if the XGrid tools are installed.
Unfortunately, we have little documentation on how to run with XGrid at
this point other than a fairly lengthy e-mail that Brian Barrett wrote
on the Open MPI user's mailing list.
Since Open MPI 1.1.2, we also support authentication using Kerberos.
The process is essentially the same, but there is no need to specify
the XGRID_PASSWORD field. Open MPI applications will then run as
the authenticated user, rather than as an unauthenticated user.
|5. Where do I get more information about running under XGrid?|
Please write to us on the user's mailing list. Hopefully, any
replies that we send will contain enough information to create proper
FAQs about how to use Open MPI with XGrid.
|6. Is Open MPI included in OS X?|
Open MPI v1.2.3 was included in some older versions of OS X --
starting with version 10.5 (Leopard). It was removed in more recent
versions of OS X (we're not sure in which version it disappeared --
but your best bet is to simply download
a modern version of Open MPI for your modern version of OS X).
Note, however, that Leopard does not include a Fortran compiler,
so the OS X-shipped version of Open MPI does not include Fortran
support.
If you need or want Fortran support, you will need to build your own copy
of Open MPI (presumably after you have installed a Fortran compiler).
The Open MPI team strongly recommends not overwriting the OS
X-installed version of Open MPI, but rather installing it somewhere
else.
|7. How do I not use the OS X-bundled Open MPI?|
There are a few reasons you might not want to use the OS
X-bundled Open MPI, such as wanting Fortran support, upgrading to a
new version, etc.
If you wish to use a community version of Open MPI, you can download
and build Open MPI on OS X just like any other supported platform. We
strongly recommend not replacing the OS X-installed Open MPI, but
rather installing to an alternate location (such as /opt/openmpi).
Once you have successfully installed Open MPI, be sure to prefix your
PATH with the bindir of Open MPI. This will ensure that you are using
your newly-installed Open MPI, not the OS X-installed Open MPI. For
example:
shell$ wget https://www.open-mpi.org/.../open-mpi....
shell$ tar xf openmpi-<version>.tar.bz2
shell$ cd openmpi-<version>
shell$ ./configure --prefix=/opt/openmpi 2>&1 | tee config.out
[...lots of output...]
shell$ make -j 4 2>&1 | tee make.out
[...lots of output...]
shell$ sudo make install 2>&1 | tee install.out
[...lots of output...]
shell$ export PATH=/opt/openmpi/bin:$PATH
[...see output from newly-installed Open MPI...]
Of course, you'll want to make your
PATH changes permanent. One
way to do this is to edit your shell startup file.
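As a sketch, for a Bourne-style shell the startup-file edit might look
like this (~/.profile is one common choice; bash may use
~/.bash_profile and zsh ~/.zprofile -- adjust for your shell):

```shell
# Persist the PATH change for future logins by appending it to the
# shell startup file (file name varies by shell).
echo 'export PATH=/opt/openmpi/bin:$PATH' >> ~/.profile
```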
Note that there is no need to add Open MPI's libdir to
LD_LIBRARY_PATH; Open MPI's shared library build process
uses the "rpath" mechanism to automatically find the
correct shared libraries (i.e., the ones associated with this build,
vs., for example, the OS X-shipped OMPI shared libraries). Also note
that we specifically do not recommend adding Open MPI's libdir to
LD_LIBRARY_PATH.
If you build static libraries for Open MPI, there is an ordering
problem such that
/usr/lib/libmpi.dylib will be found before
$libdir/libmpi.a, and therefore MPI applications linked with
mpicc (and friends) will use the "wrong" libmpi. This can be
fixed by editing
OMPI's wrapper compilers to force the use of the right libraries,
such as with the following flag when configuring Open MPI:
shell$ ./configure --with-wrapper-ldflags="-Wl,-search_paths_first" ...
|8. I am using Open MPI 2.0.x and getting an error at application startup. How do I work around this?|
On some versions of Mac OS X / macOS Sierra, the default
temporary directory location is sufficiently long that it is easy for
an application to create file names for temporary files which exceed
the maximum allowed file name length. With Open MPI, this can lead to
errors like the following at application startup:
shell$ mpirun ... my_mpi_app
[[53415,0],0] ORTE_ERROR_LOG: Bad parameter in file ../../orte/orted/pmix/pmix_server.c at line 264
[[53415,0],0] ORTE_ERROR_LOG: Bad parameter in file ../../../../../orte/mca/ess/hnp/ess_hnp_module.c at line
The workaround for the Open MPI 2.0.x release series is to set the
TMPDIR environment variable to
/tmp or another short directory name.
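A minimal sketch of the workaround (the directory choice is up to you,
as long as the resulting path is short):

```shell
# Point TMPDIR at a short path before launching the application, so
# the temporary file names Open MPI builds stay under the length limit.
export TMPDIR=/tmp
# then launch as usual, e.g.:  mpirun ... my_mpi_app
```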