1.6.1rc1 is a bust because of a compile error. :(
It wasn't caught on the build machine because it's a bug in the openib BTL, and the build machine doesn't have OpenFabrics support.
1.6.1rc2 will be posted later today.
On Jul 27, 2012, at 10:20 PM, Jeff Squyres wrote:
> Finally! It's in the usual place:
> Please test, especially with low-registered-memory-available scenarios with Mellanox OpenFabrics devices.
> Here's a list of changes since 1.6:
> - A bunch of changes to eliminate hangs on OpenFabrics-based networks.
> Users with Mellanox hardware are ***STRONGLY ENCOURAGED*** to check
> their registered memory kernel module settings to ensure that the OS
> will allow registering more than 8GB of memory. See this FAQ item
> for details:
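The registered-memory ceiling on Mellanox mlx4 hardware is governed by the mlx4_core MTT module parameters. As a rough illustration (the parameter names are real, but the values below are assumed defaults, not read from any system; on a live machine they come from /sys/module/mlx4_core/parameters/):

```python
# Sketch: estimate max registerable memory for a Mellanox mlx4 HCA
# from its MTT module parameters. Values are illustrative assumptions;
# read the real ones from /sys/module/mlx4_core/parameters/.
PAGE_SIZE = 4096
log_num_mtt = 20       # assumed default; often too small out of the box
log_mtts_per_seg = 3   # assumed default

# Each MTT entry maps one page; total = num_mtt * mtts_per_seg * page size.
max_reg_bytes = (1 << (log_num_mtt + log_mtts_per_seg)) * PAGE_SIZE
print(max_reg_bytes // 2**30, "GB registerable")  # 32 GB with these values
```

If the result is well below your RAM, raise log_num_mtt via the module options and reload the driver.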
> - Fall back to send/receive semantics if registered memory is
> unavailable for RDMA.
> - Fix two fragment leaks when registered memory is exhausted.
> - Heuristically determine how much registered memory is available
> and warn if it's significantly less than all of RAM.
> - Artificially limit the amount of registered memory each MPI process
>   can use to about 1/Nth of the total registered memory available.
> - Improve error messages when events occur that are likely due to
> unexpected registered memory exhaustion.
> - Remove the last openib default per-peer receive queue
> specification (and make it an SRQ).
> - Switch the MPI_ALLTOALLV default algorithm to a pairwise exchange.
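A pairwise exchange staggers the communication so each rank talks to one peer per step instead of posting everything at once. A minimal sketch of the schedule (the function below is hypothetical, for illustration only, not Open MPI's implementation):

```python
def pairwise_schedule(nprocs, rank):
    """At step s, `rank` sends to (rank + s) % nprocs and receives
    from (rank - s) % nprocs. Over nprocs - 1 steps every peer is
    covered exactly once, bounding the number of outstanding
    messages at any moment."""
    return [((rank + s) % nprocs, (rank - s) % nprocs)
            for s in range(1, nprocs)]

# For 4 ranks, rank 0 pairs with (send, recv) partners per step:
print(pairwise_schedule(4, 0))  # [(1, 3), (2, 2), (3, 1)]
```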
> - Increase the openib BTL default CQ length to handle more types of
> OpenFabrics devices.
> - Lots of VampirTrace fixes; upgrade to v18.104.22.168.
> - Map MPI_2INTEGER to underlying MPI_INTEGERs, not MPI_INTs.
> - Ensure that OMPI version number handling is tolerant of spaces.
> Thanks to dragonboy for identifying the issue.
> - Fixed IN parameter marking on Fortran "mpi" module
> MPI_COMM_TEST_INTER interface.
> - Various MXM improvements.
> - Make the output of "mpirun --report-bindings" much more friendly /
>   human-readable.
> - Properly handle MPI_COMPLEX8|16|32.
> - More fixes for mpirun's processor affinity options (--bind-to-core
> and friends).
> - Use aligned memory for OpenFabrics registered memory.
> - Multiple fixes for parameter checking in MPI_ALLGATHERV,
> MPI_REDUCE_SCATTER, MPI_SCATTERV, and MPI_GATHERV. Thanks to the
> mpi4py community (Bennet Fauber, Lisandro Dalcin, Jonathan Dursi).
> - Fixed file positioning overflows in MPI_FILE_GET_POSITION,
>   MPI_FILE_GET_POSITION_SHARED, MPI_FILE_GET_SIZE, and MPI_FILE_GET_VIEW.
> - Removed the broken --cpu-set mpirun option.
> - Fix cleanup of MPI error codes. Thanks to Alexey Bayduraev for the
>   patch.
> - Fix default hostfile location. Thanks to Götz Waschk for noticing
> the issue.
> - Improve several error messages.
> Jeff Squyres
> For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/