
Subject: Re: [OMPI users] Performance: MPICH2 vs OpenMPI
From: Rajeev Thakur (thakur_at_[hidden])
Date: 2008-10-15 12:20:20


For MPICH2 1.0.7, configure with --with-device=ch3:nemesis. That will use
shared memory within a node, unlike ch3:sock, which uses TCP. Nemesis is the
default in 1.1a1.
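
Something like this should do it (reusing the prefix from your output
below; adjust paths and flags to match your setup):

$ cd mpich2-1.0.7
$ export CC=gcc
$ export CFLAGS=-O3
$ ./configure --prefix=/home/san/PERF_TEST/mpich2 --with-device=ch3:nemesis
$ make && make install

Afterwards, /home/san/PERF_TEST/mpich2/bin/mpich2version should report
"MPICH2 Device: ch3:nemesis" instead of ch3:sock.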

Rajeev
 

> Date: Wed, 15 Oct 2008 18:21:17 +0530
> From: "Sangamesh B" <forum.san_at_[hidden]>
> Subject: Re: [OMPI users] Performance: MPICH2 vs OpenMPI
> To: "Open MPI Users" <users_at_[hidden]>
> Message-ID:
> <cb60cbc40810150551sf26acc6qb1113a289ac9de6e_at_[hidden]>
> Content-Type: text/plain; charset="iso-8859-1"
>
> On Fri, Oct 10, 2008 at 10:40 PM, Brian Dobbins
> <bdobbins_at_[hidden]> wrote:
>
> >
> > Hi guys,
> >
> > On Fri, Oct 10, 2008 at 12:57 PM, Brock Palen
> > <brockp_at_[hidden]> wrote:
> >
> >> Actually I had much different results,
> >>
> >> gromacs-3.3.1 one node dual core dual socket opt2218 openmpi-1.2.7
> >> pgi/7.2
> >> mpich2 gcc
> >>
> >
> > For some reason, the difference in minutes didn't come through, it
> > seems, but I would guess that if it's a medium-large difference,
> > then it has its roots in PGI 7.2 vs. GCC rather than MPICH2 vs.
> > OpenMPI. Though, to be fair, I find GCC vs. PGI (for C code) is
> > often a toss-up - one may beat the other handily on one code, and
> > then lose just as badly on another.
> >
> >> I think my install of mpich2 may be bad; I have never installed it
> >> before, only mpich1, OpenMPI and LAM. So take my mpich2 numbers
> >> with salt. Lots of salt.
> >
> >
> > I think the biggest difference in performance with various MPICH2
> > installs comes from differences in the 'channel' used. I tend to
> > make sure that I use the 'nemesis' channel, which may or may not be
> > the default these days. If not, though, most people would probably
> > want it. I think it has issues with threading (or did ages ago?),
> > but I seem to recall it being considerably faster than even the
> > 'ssm' channel.
> >
> > Sangamesh: My advice to you would be to recompile Gromacs and
> > specify, in the *Gromacs* compile / configure, the same CFLAGS you
> > used with MPICH2, e.g. "-O2 -m64", whatever. If you do that, I bet
> > the times between MPICH2 and OpenMPI will be pretty comparable for
> > your benchmark case - especially when run on a single processor.
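> >
> > Something like this, just for illustration (assuming the usual
> > Gromacs 3.3 autoconf build, with the MPI you want to test first in
> > your PATH; the install prefix here is made up, so adjust it and the
> > flags to your own setup):
> >
> > $ export PATH=/home/san/PERF_TEST/openmpi/bin:$PATH
> > $ export CFLAGS="-O3"        # same flags as the MPI builds
> > $ ./configure --enable-mpi --prefix=$HOME/PERF_TEST/gromacs-ompi
> > $ make && make install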
> >
>
> I reinstalled all software with -O3 optimization. Following are the
> performance numbers for a 4-process job on a single node:
>
> MPICH2: 26 m 54 s
> OpenMPI: 24 m 39 s
>
> More details:
>
> $ /home/san/PERF_TEST/mpich2/bin/mpich2version
> MPICH2 Version: 1.0.7
> MPICH2 Release date: Unknown, built on Mon Oct 13 18:02:13 IST 2008
> MPICH2 Device: ch3:sock
> MPICH2 configure: --prefix=/home/san/PERF_TEST/mpich2
> MPICH2 CC: /usr/bin/gcc -O3 -O2
> MPICH2 CXX: /usr/bin/g++ -O2
> MPICH2 F77: /usr/bin/gfortran -O3 -O2
> MPICH2 F90: /usr/bin/gfortran -O2
>
>
> $ /home/san/PERF_TEST/openmpi/bin/ompi_info
> Open MPI: 1.2.7
> Open MPI SVN revision: r19401
> Open RTE: 1.2.7
> Open RTE SVN revision: r19401
> OPAL: 1.2.7
> OPAL SVN revision: r19401
> Prefix: /home/san/PERF_TEST/openmpi
> Configured architecture: x86_64-unknown-linux-gnu
> Configured by: san
> Configured on: Mon Oct 13 19:10:13 IST 2008
> Configure host: locuzcluster.org
> Built by: san
> Built on: Mon Oct 13 19:18:25 IST 2008
> Built host: locuzcluster.org
> C bindings: yes
> C++ bindings: yes
> Fortran77 bindings: yes (all)
> Fortran90 bindings: yes
> Fortran90 bindings size: small
> C compiler: /usr/bin/gcc
> C compiler absolute: /usr/bin/gcc
> C++ compiler: /usr/bin/g++
> C++ compiler absolute: /usr/bin/g++
> Fortran77 compiler: /usr/bin/gfortran
> Fortran77 compiler abs: /usr/bin/gfortran
> Fortran90 compiler: /usr/bin/gfortran
> Fortran90 compiler abs: /usr/bin/gfortran
> C profiling: yes
> C++ profiling: yes
> Fortran77 profiling: yes
> Fortran90 profiling: yes
> C++ exceptions: no
> Thread support: posix (mpi: no, progress: no)
> Internal debug support: no
> MPI parameter check: runtime
> Memory profiling support: no
> Memory debugging support: no
> libltdl support: yes
> Heterogeneous support: yes
> mpirun default --prefix: no
>
> Thanks,
> Sangamesh