Table of contents:
- What operating systems does Open MPI support?
- What hardware platforms does Open MPI support?
- What network interconnects does Open MPI support?
- What run-time environments does Open MPI support?
- Does Open MPI support LSF?
- How much MPI does Open MPI support?
- Is Open MPI thread safe?
- Does Open MPI support 64 bit environments?
- Does Open MPI support execution in heterogeneous environments?
- Does Open MPI support parallel debuggers?
|1. What operating systems does Open MPI support?|
We primarily develop Open MPI on Linux,
OS X, Solaris (both 32 and 64 bit, pre-v1.8 only), and
Windows (Windows XP, Windows HPC Server 2003/2008, and Windows 7 RC;
again, pre-v1.8 only).
Open MPI is fairly POSIX-neutral, so it will run without too many
modifications on most POSIX-like systems. Hence, if we haven't listed
your favorite operating system here, it should not be difficult to get
Open MPI to compile and run properly. The biggest obstacle is
typically the assembly language, but that's fairly modular and we're
happy to provide information about how to port it to new platforms.
It should be noted that we are quite open to accepting patches for
operating systems that we do not currently support. If we do not have
systems to test these on, we probably will only claim to
"unofficially" support those systems.
Microsoft Windows support was added in v1.3.3; please see the documentation included with the distribution for details.
NOTE: as of the v1.8 series,
we no longer support either Microsoft Windows or Solaris.
|2. What hardware platforms does Open MPI support?|
Essentially all the common platforms that the operating
systems listed in the previous question support.
For example, Linux runs on a wide variety of platforms, and we
certainly can't claim to support all of them (e.g., Open MPI does not
run in embedded environments). We do include assembly support for
Intel, AMD, and PowerPC chips, for example.
|3. What network interconnects does Open MPI support?|
Open MPI is based upon a component architecture; its MPI
point-to-point functionality utilizes only a small number of components
at run-time. The architecture was specifically designed to make adding
native support for new network interconnects easy.
Here's the list of networks that we natively support:
- TCP / ethernet
- Shared memory
- Loopback (send-to-self)
- Myrinet / GM (pre-1.8 only)
- Myrinet / MX
- Infiniband / OpenIB
- Infiniband / mVAPI (pre-1.8)
- Portals (pre-1.8)
- Portals4 (1.8 and above)
Is there a network that you'd like to see supported that is not shown
above? Contributions are welcome!
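Which of these components is used can be controlled at run time through Open MPI's MCA parameter system. As a sketch (the application name `./my_mpi_app` is hypothetical; `tcp` and `self` are component names from the `btl` framework):

```shell
# Restrict point-to-point transports to TCP plus the
# send-to-self loopback component:
mpirun --mca btl tcp,self -np 4 ./my_mpi_app

# The same selection, expressed as an environment variable:
export OMPI_MCA_btl=tcp,self
mpirun -np 4 ./my_mpi_app
```

If a requested component cannot be used (e.g., the interconnect's libraries are not present), Open MPI will report an error at startup rather than silently falling back to another network.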
|4. What run-time environments does Open MPI support?|
Open MPI is layered on top of the Open Run-Time Environment (ORTE),
which originally started as a small portion of the Open MPI code base.
However, ORTE has effectively spun off into its own sub-project.
ORTE is a modular system that was specifically architected to abstract
away the back-end run-time environment (RTE) system, providing a
neutral API to the upper-level Open MPI layer. Components can be
written for ORTE that allow it to natively utilize a wide variety of
back-end run-time systems.
ORTE currently natively supports the following run-time environments:
- Recent versions of BProc (e.g., Clustermatic, pre-1.3 only)
- Sun Grid Engine
- PBS Pro, Torque, and Open PBS (the TM system)
- POE (pre-1.8 only)
- rsh / ssh
- XGrid (pre-1.3 only)
- Yod (Red Storm, pre-1.5 only)
Is there a run-time system that you'd like to use Open MPI with that
is not listed above? Component
contributions are welcome!
|5. Does Open MPI support LSF?|
Starting with Open MPI v1.3, yes!
Prior to Open MPI v1.3, Platform released a script-based integration
in the LSF 6.1 and 6.2 maintenance packs around November of 2006. If
you want this integration, please contact your normal Platform support
channels.
|6. How much MPI does Open MPI support?|
Open MPI 1.2 supports all of MPI-2.0.
Open MPI 1.3 supports all of MPI-2.1.
Open MPI 1.8 supports all of MPI-3.
|7. Is Open MPI thread safe?|
MPI_THREAD_MULTIPLE (i.e., multiple threads
executing within the MPI library) and asynchronous message passing
progress (i.e., continuing message passing operations even while no
user threads are in the MPI library) have been designed into Open MPI
from its first planning meetings.
Support for MPI_THREAD_MULTIPLE is included in the first version of
Open MPI, but it is only lightly tested and likely still has some
bugs. Support for asynchronous progress is included in the TCP
point-to-point device, but it, too, has only had light testing and
likely still has bugs.
Completing the testing for full support of
asynchronous progress is planned for the near future.
|8. Does Open MPI support 64 bit environments?|
Yes, Open MPI is 64 bit clean. You should be able to use Open
MPI on 64 bit architectures and operating systems with no
difficulties.
|9. Does Open MPI support execution in heterogeneous environments?|
As of v1.1, Open MPI requires that the size of C, C++, and
Fortran datatypes be the same on all platforms within a single
parallel application with the exception of types represented by
MPI_LOGICAL -- size differences in these types
between processes are properly handled. Endian differences between
processes in a single MPI job are properly and automatically handled.
Prior to v1.1, Open MPI did not include any support for data size or
endian differences between processes.
|10. Does Open MPI support parallel debuggers?|
Yes. Open MPI supports the TotalView API for parallel process
attaching, which several parallel debuggers support (e.g., DDT, fx2).
As part of v1.2.4 (released in September 2007), Open MPI also supports the
TotalView API for viewing message queues in running MPI processes.
See this FAQ entry for
details on how to run Open MPI jobs under TotalView, and this FAQ entry for
details on how to run Open MPI jobs under DDT.
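As a sketch of the usual TotalView launch pattern (assuming TotalView is installed; `./my_mpi_app` is a hypothetical application name, and you should check your TotalView and Open MPI documentation for version specifics):

```shell
# Start the whole MPI job under TotalView control; the -a flag
# passes everything after it to mpirun unchanged.
totalview mpirun -a -np 4 ./my_mpi_app
```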
NOTE: The integration of Open
MPI message queue support is problematic with 64 bit versions of
TotalView prior to v8.3:
- The message queue views will be truncated
- Both the communicators and requests list will be incomplete
- Both the communicators and requests list may be filled with wrong
values (such as an MPI_Send to the destination ANY_SOURCE)
There are two workarounds:
- Use a 32 bit version of TotalView
- Upgrade to TotalView v8.3