Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Performance: MPICH2 vs OpenMPI
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2008-10-08 09:46:21

On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote:

> I wanted to switch from mpich2/mvapich2 to OpenMPI, as
> OpenMPI supports both ethernet and infiniband. Before doing that I
> tested an application 'GROMACS' to compare the performance of MPICH2
> & OpenMPI. Both have been compiled with GNU compilers.
> After this benchmark, I came to know that OpenMPI is slower than
> MPICH2. This benchmark was run on an AMD dual-core, dual-Opteron
> system. Both were compiled with default configurations.
> The job is run on 2 nodes - 8 cores.
> OpenMPI - 25 m 39 s.
> MPICH2 - 15 m 53 s.

A few things:

- What version of Open MPI are you using? Please send the information
listed here:

- Did you specify to use mpi_leave_pinned? Use "--mca
mpi_leave_pinned 1" on your mpirun command line. (I don't know if
leave-pinned behavior benefits GROMACS or not, but it likely won't hurt.)

- Did you enable processor affinity? Use "--mca mpi_paffinity_alone
1" on your mpirun command line.

- Are you sure that Open MPI didn't fall back to ethernet (and not use
IB)? Use "--mca btl openib,self" on your mpirun command line.
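Putting the three suggestions above together, a sample invocation might look like the following. This is a sketch: the binary name (mdrun_mpi), the hostfile name, and the process count are placeholders for your actual GROMACS setup, not values from this thread.

```shell
# Run GROMACS over InfiniBand with leave-pinned and processor affinity
# enabled. "myhosts" and "mdrun_mpi" are placeholder names.
mpirun --mca mpi_leave_pinned 1 \
       --mca mpi_paffinity_alone 1 \
       --mca btl openib,self \
       -np 8 --hostfile myhosts mdrun_mpi
```

Restricting the btl list to "openib,self" forces the run to fail outright rather than silently falling back to TCP over ethernet, which makes the IB-vs-ethernet question easy to answer.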

- Have you tried compiling Open MPI with something other than GCC?
Just this week, we've gotten some reports from an OMPI member that
they are sometimes seeing *huge* performance differences with OMPI
compiled with GCC vs. any other compiler (Intel, PGI, Pathscale). We
are working to figure out why; no root cause has been identified yet.
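If you want to test a non-GCC build, Open MPI's configure script accepts the standard compiler variables. A minimal sketch using the Intel compilers (the install prefix is a placeholder; swap in PGI or Pathscale compiler names as appropriate):

```shell
# Build Open MPI with the Intel compilers instead of GCC.
# The --prefix path is a placeholder; choose your own install location.
./configure CC=icc CXX=icpc F77=ifort FC=ifort \
            --prefix=/opt/openmpi-intel
make all install
```

Remember to rebuild GROMACS against the new MPI install (i.e., with the mpicc/mpif90 wrappers from that prefix) so both layers use the same compiler.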

Jeff Squyres
Cisco Systems