This FAQ is for Open MPI v4.x and earlier.
If you are looking for documentation for Open MPI v5.x and later, please visit docs.open-mpi.org.
Table of contents:
- What versions of BProc does Open MPI work with?
- What prerequisites are necessary for running an Open MPI job under BProc?
1. What versions of BProc does Open MPI work with?
BProc support was dropped from Open MPI in the Open MPI v1.3 series.
The last version of Open MPI to include BProc support was Open MPI 1.2.9, which was
released in February of 2009.
As of December 2005, Open MPI supports recent versions of
BProc, such as those found in Clustermatic. We have
not tested with older forks of the BProc project, such as those from
Scyld (now defunct). Since Open MPI's BProc support relies on features
found only in recent BProc versions, it is doubtful (though entirely
untested) whether it would work on Scyld systems.
2. What prerequisites are necessary for running an Open MPI job under BProc?
In general, the prerequisites are the same as for running Open MPI jobs
in other environments (see this FAQ category for more general
information).
However, it is worth noting that BProc may not bring all necessary
dynamic libraries along with a process when it migrates to a back-end
compute node. Additionally, Open MPI opens its components on the fly
(i.e., after the process has started), so if these components are
unavailable on the back-end compute nodes, Open MPI applications may
fail.
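One quick way to see what a process would need on the back-end nodes is
to inspect its dynamic library dependencies and Open MPI's run-time
component (plugin) files. The sketch below is illustrative only: the
executable name ring_c and the install prefix /opt/openmpi are
placeholders, not values from this FAQ.

    # List the shared libraries the application itself links against
    # (ring_c is a placeholder executable name).
    ldd ./ring_c

    # List the component plugins that Open MPI loads at run time;
    # /opt/openmpi is a placeholder install prefix.
    ls /opt/openmpi/lib/openmpi/mca_*.so

If any of these files are missing on a back-end compute node, the
application can fail there even though it starts on the head node.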
In general, the Open MPI team recommends one of the following two
solutions when running on BProc clusters (in order of preference):
- Compile Open MPI statically, meaning that the build produces static
".a" libraries with all components included in them (as opposed to
dynamic ".so" libraries plus a separate ".so" file for each component
that is found and loaded at run-time), so that applications do not need
to find any shared libraries or components when they are migrated to
back-end compute nodes. This can be accomplished by specifying
--enable-static --disable-shared to configure when building Open MPI
(see the sketch after this list).
- If you do not wish to use static compilation, ensure that Open MPI is
fully installed on all nodes (i.e., the head node and all compute
nodes) in the same directory location. For example, if Open MPI is
installed in /opt/openmpi-5.0.5 on the head node, ensure that it is
also installed in that same directory on all the compute nodes.
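As a minimal sketch of the first (static) solution: the
--enable-static and --disable-shared flags come from the text above,
while the /opt/openmpi install prefix is a placeholder.

    # Configure Open MPI to build static libraries, with all components
    # compiled into the libraries rather than as separate .so plugins.
    # /opt/openmpi is a placeholder prefix.
    ./configure --prefix=/opt/openmpi --enable-static --disable-shared

    # Build and install as usual.
    make all install

With this build, migrated processes carry everything they need and do
not depend on finding shared libraries or component files on the
back-end compute nodes.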