On Wed, Oct 8, 2008 at 7:09 PM, Brock Palen <email@example.com> wrote:
You're doing this on just one node? That would be using the OpenMPI SM (shared-memory) transport. Last I knew it wasn't that optimized, though it should still be much faster than TCP.
It's on 2 nodes. I'm using TCP only; there is no InfiniBand hardware.
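(One way to confirm that the inter-node traffic really is going over TCP, with the on-node pairs using sm, is to list the BTLs explicitly on the mpirun command line. A minimal sketch, assuming the 1.2.x MCA parameter names:

$ /opt/ompi127/bin/mpirun --mca btl tcp,sm,self -machinefile ./mach -np 8 /opt/apps/gromacs333_ompi/bin/mdrun_mpi

Dropping sm from the list (--mca btl tcp,self) forces even the on-node ranks over TCP, which is only useful as a diagnostic.)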
I am surprised at your result, though I don't have MPICH2 on the cluster right now and don't have time to compare.
How did you run the job?
$ time /opt/mpich2/gnu/bin/mpirun -machinefile ./mach -np 8 /opt/apps/gromacs333/bin/mdrun_mpi | tee gro_bench_8p
$ time /opt/ompi127/bin/mpirun -machinefile ./mach -np 8 /opt/apps/gromacs333_ompi/bin/mdrun_mpi | tee gromacs_openmpi_8process
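(Both commands point at the same ./mach machinefile. The simplest format both stacks accept is one hostname per slot; a sketch for 2 nodes with 4 cores each, using hypothetical hostnames node1/node2:

node1
node1
node1
node1
node2
node2
node2
node2

OpenMPI folds the repeated lines into slot counts, and MPICH2's mpd places ranks over the lines in order, so both should end up with 4 ranks per node.)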
On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote:
I wanted to switch from MPICH2/MVAPICH2 to OpenMPI, as OpenMPI supports both Ethernet and InfiniBand. Before doing that, I tested an application, GROMACS, to compare the performance of MPICH2 and OpenMPI. Both were compiled with GNU compilers.
This benchmark showed that OpenMPI is slower than MPICH2.
The benchmark was run on dual-core, dual-socket AMD Opteron nodes. Both MPI libraries were built with their default configurations.
The job was run on 2 nodes (8 cores total).
OpenMPI - 25 m 39 s.
MPICH2 - 15 m 53 s.
Any comments?
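(One thing that can produce a gap this large on multi-core nodes is process placement: if one MPI stack binds ranks to cores and the other lets the scheduler migrate them, wall-clock times diverge even over the same network. A hedged sketch of how to check and enable binding on the OpenMPI side, assuming the 1.2.x parameter names:

$ /opt/ompi127/bin/ompi_info --param mpi all | grep paffinity
$ /opt/ompi127/bin/mpirun --mca mpi_paffinity_alone 1 -machinefile ./mach -np 8 /opt/apps/gromacs333_ompi/bin/mdrun_mpi

Setting mpi_paffinity_alone to 1 pins each rank to its own core, which is only appropriate when the nodes are not shared with other jobs.)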