You're doing this on just one node? That would be using the Open MPI sm
(shared-memory) transport. Last I knew it wasn't that optimized, though
it should still be much faster than TCP.
I am surprised at your result, though since I don't have MPICH2 on the
cluster right now I don't have time to compare.
How did you run the job?
Center for Advanced Computing
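For reference, a minimal sketch of how one might pin Open MPI to specific transports and check which BTLs actually get used (flag names assume an Open MPI 1.x-era install; the hostfile and the `mdrun_mpi` binary name are hypothetical):

```shell
# Single-node run: restrict Open MPI to the shared-memory and self BTLs.
mpirun --mca btl self,sm -np 4 ./mdrun_mpi

# 2 nodes / 8 cores: make the transport and process layout explicit
# (add openib instead of tcp if running over InfiniBand).
mpirun -np 8 -hostfile hosts --byslot --mca btl self,sm,tcp ./mdrun_mpi

# Verbose BTL selection output helps confirm sm is actually in use.
mpirun --mca btl_base_verbose 30 -np 4 ./mdrun_mpi
```

Knowing the exact mpirun command line used for both runs would make the comparison easier to interpret, since default process placement differs between MPI implementations.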
On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote:
> Hi All,
> I wanted to switch from mpich2/mvapich2 to OpenMPI, as
> OpenMPI supports both Ethernet and InfiniBand. Before doing that, I
> tested an application 'GROMACS' to compare the performance of
> MPICH2 & OpenMPI. Both have been compiled with GNU compilers.
> After this benchmark, I came to know that OpenMPI is slower than
> MPICH2. This benchmark was run on an AMD dual-core, dual-Opteron
> machine. Both were compiled with default configurations.
> The job is run on 2 nodes - 8 cores.
> OpenMPI - 25 m 39 s.
> MPICH2 - 15 m 53 s.
> Any comments ..?