
Open MPI Announcements Mailing List Archives


Subject: [Open MPI Announce] Announcing the release of Open MPI version 1.3
From: Tim Mattox (timattox_at_[hidden])
Date: 2009-01-19 16:28:18


The Open MPI Team, representing a consortium of research, academic,
and industry partners, is pleased to announce the release of Open MPI
version 1.3. This release contains many bug fixes, feature
enhancements, and performance improvements over the v1.2 series,
including (but not limited to):

   * MPI-2.1 compliant
   * New Notifier framework
   * Additional architectures, operating systems, and batch schedulers supported
   * Improved thread safety
   * MPI_REAL16 and MPI_COMPLEX32
   * Improved MPI C++ bindings
   * Valgrind support
   * Updated ROMIO to the version from MPICH2-1.0.7
   * Improved scalability
     - Process launch times reduced by an order of magnitude
     - Sparse groups
     - On-demand connection setup
   * Improved point-to-point latencies
   * Better adaptive algorithms for multi-rail support
   * Additional collective algorithms; improved collective performance
   * Numerous enhancements for OpenFabrics
   * iWARP support
   * Fault tolerance
     - Coordinated checkpoint/restart
     - Support for BLCR and the "self" checkpointer
   * Finer grained resource control and mapping (cores, HCAs, etc)
   * Many other new runtime features
   * Numerous bug fixes

Version 1.3 can be downloaded from the main Open MPI web site or any
of its mirrors (mirrors will be updating shortly).

We strongly recommend that all users upgrade to version 1.3 if possible.

Here is a list of some of the changes in v1.3 as compared to the v1.2 series:

- Fixed deadlock issues under heavy messaging scenarios
- Extended the OS X 10.5.x (Leopard) workaround for a problem when
  assembly code is compiled with -g[0-9]. Thanks to Barry Smith for
  reporting the problem. See ticket #1701.
- Disabled MPI_REAL16 and MPI_COMPLEX32 support on platforms where the
  bit representation of REAL*16 is different than that of the C type
  of the same size (usually long double). Thanks to Julien Devriendt
  for reporting the issue. See ticket #1603.
- Increased the size of MPI_MAX_PORT_NAME to 1024 from 36. See ticket #1533.
- Added "notify debugger on abort" feature. See tickets #1509 and #1510.
  Thanks to Seppo Sahrakropi for the bug report.
- Upgraded Open MPI tarballs to use Autoconf 2.63, Automake 1.10.1,
  Libtool 2.2.6a.
- Added missing MPI::Comm::Call_errhandler() function. Thanks to Dave
  Goodell for bringing this to our attention.
- Increased MPI_SUBVERSION value in mpi.h to 1 (i.e., MPI 2.1).
- Changed behavior of MPI_GRAPH_CREATE, MPI_TOPO_CREATE, and several
  other topology functions per MPI-2.1.
- Fixed the type of the C++ constant MPI::IN_PLACE.
- Various enhancements to the openib BTL:
  - Added btl_openib_if_[in|ex]clude MCA parameters for
    including/excluding comma-delimited lists of HCAs and ports.
  - Added RDMA CM support, including btl_openib_cpc_[in|ex]clude MCA
    parameters.
  - Added NUMA support to only use "near" network adapters
  - Added "Bucket SRQ" (BSRQ) support to better utilize registered
    memory, including btl_openib_receive_queues MCA parameter
  - Added ConnectX XRC support (and integrated with BSRQ)
  - Added btl_openib_ib_max_inline_data MCA parameter
  - Added iWARP support
  - Revamped flow control mechanisms to be more efficient
  - "mpi_leave_pinned=1" is now the default when possible,
    automatically improving performance for large messages when
    application buffers are re-used
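  As a sketch of how these new MCA parameters are used (the device name
  mthca0 and the application name are hypothetical placeholders):

```shell
# Restrict the openib BTL to port 1 of a hypothetical mthca0 HCA
mpirun --mca btl openib,self \
       --mca btl_openib_if_include mthca0:1 \
       -np 4 ./my_mpi_app

# Select the RDMA CM connection manager (required for iWARP devices)
mpirun --mca btl_openib_cpc_include rdmacm -np 4 ./my_mpi_app
```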
- Eliminated duplicated error messages when multiple MPI processes fail
  with the same error.
- Added NUMA support to the shared memory BTL.
- Added Valgrind-based memory checking for MPI-semantic checks.
- Added support for some optional Fortran datatypes (MPI_LOGICAL1,
  MPI_LOGICAL2, MPI_LOGICAL4, and MPI_LOGICAL8).
- Removed the use of the STL from the C++ bindings.
- Added support for Platform/LSF job launchers. Requires Platform LSF
  v7.0.2 or later.
- Updated ROMIO with the version from MPICH2 1.0.7.
- Added RDMA capable one-sided component (called rdma), which
  can be used with BTL components that expose a full one-sided
  interface.
- Added the optional datatype MPI_REAL2. As this is added after the
  existing predefined datatypes in the Fortran header files, there will
  not be any compatibility issues.
- Added Portable Linux Processor Affinity (PLPA) for Linux.
- Added finer-grained symbol export control via the visibility feature
  offered by some compilers.
- Added checkpoint/restart process fault tolerance support. Initially
  supports a LAM/MPI-like protocol.
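  A sketch of the typical checkpoint/restart workflow, assuming Open MPI
  was configured with checkpoint/restart support and a working BLCR
  installation (the PID and snapshot name are placeholders):

```shell
# Launch the job with the checkpoint/restart framework enabled
mpirun -am ft-enable-cr -np 4 ./my_mpi_app

# From another terminal, checkpoint the running job by mpirun's PID
ompi-checkpoint <mpirun_pid>

# Later, restart the job from the saved global snapshot
ompi-restart <global_snapshot_handle>
```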
- Removed "mvapi" BTL; all InfiniBand support now uses the OpenFabrics
  driver stacks ("openib" BTL).
- Added more stringent MPI API parameter checking to help user-level
  debugging.
- The ptmalloc2 memory manager component is now by default built as
  a standalone library named libopenmpi-malloc. Users wanting to
  use leave_pinned with ptmalloc2 will now need to link the library
  into their application explicitly. All other users will use the
  libc-provided allocator instead of Open MPI's ptmalloc2. This change
  may be overridden with the --enable-ptmalloc2-internal configure option.
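  A sketch of the two resulting usage patterns (my_app.c is a
  hypothetical application):

```shell
# Restore the old behavior: build ptmalloc2 into the Open MPI library
./configure --enable-ptmalloc2-internal

# Default build: link the standalone allocator explicitly when an
# application wants leave_pinned with ptmalloc2
mpicc my_app.c -o my_app -lopenmpi-malloc
```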
- The leave_pinned options will now default to using mallopt on
  Linux in the cases where ptmalloc2 was not linked in. mallopt
  will also only be available if munmap can be intercepted (the
  default whenever Open MPI is not compiled with
  --without-memory-manager).
- Open MPI will now complain and refuse to use leave_pinned if
  no memory intercept / mallopt option is available.
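  The leave_pinned behavior can also be controlled explicitly at run
  time via the usual MCA mechanism, for example:

```shell
# Force leave-pinned behavior on (requires a working memory intercept
# or mallopt, per the notes above)
mpirun --mca mpi_leave_pinned 1 -np 4 ./my_mpi_app

# Or disable it entirely
mpirun --mca mpi_leave_pinned 0 -np 4 ./my_mpi_app
```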
- Added the option of using Perl-based wrapper compilers instead of
  the C-based wrapper compilers. The Perl-based version does not have
  all the features of the C-based version, but works better in
  cross-compile environments.
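  Both flavors present the same basic command-line interface; a typical
  invocation looks like (hello.c is a hypothetical program):

```shell
# Compile and link an MPI program through the wrapper compiler
mpicc hello.c -o hello

# Show the underlying compiler command line the wrapper would run
mpicc -showme
```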