The Open MPI Team, representing a consortium of research, academic, and industry partners, is pleased to announce the next release in the stable release series: Open MPI version 1.6.1.
Version 1.6.1 is mainly a bugfix release. All users are encouraged to upgrade to v1.6.1 when possible.
Note that v1.6.1 is ABI compatible with the entire v1.5.x and v1.6.x series, but is not ABI compatible with the v1.4.x series. See http://www.open-mpi.org/software/ompi/versions/ for a description of Open MPI's release methodology.
Version 1.6.1 can be downloaded from the main Open MPI web site or any of its mirrors (Windows binaries will be available shortly; mirrors will also be updating soon).
Here is a list of changes in v1.6.1 as compared to v1.6:
- A bunch of changes to eliminate hangs on OpenFabrics-based networks.
Users with Mellanox hardware are ***STRONGLY ENCOURAGED*** to check
their registered memory kernel module settings to ensure that the OS
will allow registering more than 8GB of memory. See this FAQ item
on the Open MPI web site for details.
- Fall back to send/receive semantics if registered memory is
unavailable for RDMA.
- Fix two fragment leaks when registered memory is exhausted.
- Heuristically determine how much registered memory is available
and warn if it's significantly less than all of RAM.
- Artificially limit the amount of registered memory each MPI process
  can use to about 1/Nth of the total registered memory available.
- Improve error messages when events occur that are likely due to
unexpected registered memory exhaustion.
- Fix a double-semicolon error in the C++ bindings in <mpi.h>. Thanks to John
Foster for pointing out the issue.
- Allow -Xclang to be specified multiple times in CFLAGS. Thanks to
P. Martin for raising the issue.
- Break up a giant "print *" statement in the ABI-preserving incorrect
MPI_SCATTER interface in the "large" Fortran "mpi" module. Thanks
to Juan Escobar for the initial patch.
- Switch the MPI_ALLTOALLV default algorithm to a pairwise exchange
  (see the sketch after this list).
- Increase the openib BTL default CQ length to handle more types of
  OpenFabrics devices.
- Lots of VampirTrace fixes; upgraded the bundled VampirTrace to a
  newer upstream release.
- Map MPI_2INTEGER to underlying MPI_INTEGERs, not MPI_INTs.
- Ensure that the OMPI version number handling is tolerant of spaces.
Thanks to dragonboy for identifying the issue.
- Fixed IN parameter marking in the Fortran "mpi" module interfaces.
- Various MXM improvements.
- Make the output of "mpirun --report-bindings" much more friendly /
  human-readable.
- Properly handle MPI_COMPLEX8|16|32.
- More fixes for mpirun's processor affinity options (--bind-to-core
  and friends).
- Use aligned memory for OpenFabrics registered memory.
- Multiple fixes for parameter checking in MPI_ALLGATHERV,
MPI_REDUCE_SCATTER, MPI_SCATTERV, and MPI_GATHERV. Thanks to the
mpi4py community (Bennet Fauber, Lisandro Dalcin, Jonathan Dursi).
- Fixed file positioning overflows in MPI_FILE_GET_POSITION,
MPI_FILE_GET_POSITION_SHARED, MPI_FILE_GET_SIZE, and MPI_FILE_GET_VIEW.
- Removed the broken --cpu-set mpirun option.
- Fix cleanup of MPI errorcodes. Thanks to Alexey Bayduraev for the
  patch.
- Fix default hostfile location. Thanks to Götz Waschk for noticing
  the issue.
- Improve several error messages.
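
As a footnote on the MPI_ALLTOALLV item above, the C sketch below illustrates the general pairwise-exchange communication pattern. It is only an illustration under simplifying assumptions (MPI_CHAR buffers and the hypothetical helper name pairwise_alltoallv); it is not Open MPI's actual tuned implementation, which handles arbitrary datatypes and many other details.

    #include <mpi.h>

    /* Pairwise exchange: in step s, rank r sends its block destined for
     * peer (r + s) % p and receives the block from peer (r - s + p) % p.
     * MPI_Sendrecv pairs each send with its matching receive, so the
     * schedule cannot deadlock.  Step 0 is the local self-copy. */
    int pairwise_alltoallv(char *sendbuf, const int *sendcounts,
                           const int *sdispls, char *recvbuf,
                           const int *recvcounts, const int *rdispls,
                           MPI_Comm comm)
    {
        int rank, size;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);

        for (int step = 0; step < size; ++step) {
            int sendto   = (rank + step) % size;
            int recvfrom = (rank - step + size) % size;
            MPI_Sendrecv(sendbuf + sdispls[sendto], sendcounts[sendto],
                         MPI_CHAR, sendto, 0,
                         recvbuf + rdispls[recvfrom], recvcounts[recvfrom],
                         MPI_CHAR, recvfrom, 0,
                         comm, MPI_STATUS_IGNORE);
        }
        return MPI_SUCCESS;
    }

Compared with posting all sends and receives at once, a pairwise schedule bounds the number of outstanding messages per process at any time, which tends to be gentler on resources such as registered memory on OpenFabrics networks.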