On our new AMD cluster (AMD Opteron 6274, 2.2 GHz) we get very bad latencies
(~1.5 us) when performing 0-byte point-to-point communication on a single node
using the Open MPI sm BTL. When using Platform MPI we get ~0.5 us latencies,
which is pretty good. The bandwidth results are similar for both MPI
implementations (~3.3 GB/s), which is fine.
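In case it helps, here is a minimal sketch of the kind of 0-byte ping-pong we
are measuring (simplified; our actual benchmark is more elaborate, so take the
iteration counts and output as illustrative only):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    const int warmup = 100, iters = 10000;
    int rank, i;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* 0-byte ping-pong between ranks 0 and 1 */
    for (i = 0; i < warmup + iters; i++) {
        if (i == warmup)
            t0 = MPI_Wtime();   /* start timing after warm-up */
        if (rank == 0) {
            MPI_Send(NULL, 0, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(NULL, 0, MPI_BYTE, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(NULL, 0, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(NULL, 0, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    /* one-way latency = half the average round-trip time */
    if (rank == 0)
        printf("latency: %.3f us\n", (t1 - t0) / (2.0 * iters) * 1e6);

    MPI_Finalize();
    return 0;
}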
One node has 64 cores and 64 GB of RAM. It does not matter how many ranks the
application allocates; we get similar results with different numbers of ranks.
We are using Open MPI 1.5.4, built with gcc 4.3.4 and no special configure
options except the installation prefix and the location of the LSF
installation.
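That is, roughly the following (the paths here are placeholders, not our real
ones):

./configure --prefix=/opt/openmpi-1.5.4 --with-lsf=/opt/lsf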
As mentioned at http://www.open-mpi.org/faq/?category=sm, we tried using
/dev/shm instead of /tmp for the session directory, but it had no effect.
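We pointed the session directory at /dev/shm roughly like this (assuming the
orte_tmpdir_base MCA parameter described in the FAQ; the binary name is just
an example):

mpirun -np 2 -mca orte_tmpdir_base /dev/shm ./pingpong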
Furthermore, we tried the current release candidate 1.5.5rc1 of Open MPI,
which provides an option to use SysV shared memory (-mca shmem sysv); this
also results in similarly poor latencies.
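That is, an invocation along the lines of (again, the binary name is just an
example):

mpirun -np 2 -mca shmem sysv ./pingpong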
Do you have any idea what could be causing this? Please help!