Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Performance: MPICH2 vs OpenMPI
From: Brock Palen (brockp_at_[hidden])
Date: 2008-10-10 12:57:34


Actually, I got much different results.

gromacs-3.3.1, one node, dual-core dual-socket Opteron 2218:
  openmpi-1.2.7 + pgi/7.2
  mpich2 + gcc

19M OpenMPI
     M MPICH2

So for me, OpenMPI + PGI was faster; I don't know how you got such a
low MPICH2 number.
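
If you want to rule out differences in how the two runs were timed, a
minimal sketch (the MPICH2 path is a placeholder, and each mdrun needs
to be the build linked against that MPI):

time /opt/ompi127/bin/mpirun -np 4 mdrun -v    # OpenMPI build
time /opt/mpich2/bin/mpirun -np 4 mdrun -v     # MPICH2 build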

On the other hand, if you do this preprocessing before you run:

grompp -sort -shuffle -np 4    # redistribute/sort molecules across the 4 ranks
mdrun -v                       # then run as usual

With -sort and -shuffle, the OpenMPI run time went down:

12M OpenMPI + sort/shuffle

I think my install of MPICH2 may be bad; I have never installed it
before, only MPICH1, OpenMPI, and LAM. So take my MPICH2 numbers with
a grain of salt. Lots of salt.
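
If you want to sanity-check an MPICH2 install first, something like
this is a reasonable start (this assumes the mpd-based launcher of
MPICH2 1.0.x; cpi is the example program shipped in the MPICH2 source
tree):

mpich2version                  # show the version and configure options it was built with
mpdtrace                       # verify the mpd daemon ring is up
mpiexec -n 4 ./examples/cpi    # bundled example; checks basic startup and messaging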

On that point, though: -sort and -shuffle may be useful for you, but be
sure to understand what they do before you use them.
Read:
http://cac.engin.umich.edu/resources/software/gromacs.html

Last, make sure that you're using the single-precision version of
GROMACS for both runs; the double-precision build is about half the
speed of the single-precision one.
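
A quick way to check which precision a binary is (assuming the default
GROMACS convention of adding a _d suffix to double-precision binaries):

which mdrun mdrun_d    # mdrun_d, if present, is the double-precision build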

Brock Palen
www.umich.edu/~brockp
Center for Advanced Computing
brockp_at_[hidden]
(734)936-1985
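
P.S. On the --disable-debug point quoted below: a typical non-debug
Open MPI build looks something like this (the prefix and compiler
choices are just placeholders for your site):

./configure --prefix=/opt/ompi127 --disable-debug CC=pgcc CXX=pgCC F77=pgf77 FC=pgf90
make all install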

On Oct 10, 2008, at 1:15 AM, Sangamesh B wrote:

>
>
> On Thu, Oct 9, 2008 at 7:30 PM, Brock Palen <brockp_at_[hidden]> wrote:
> Which benchmark did you use?
>
> Out of the 4 benchmarks, I used the d.dppc benchmark.
>
> Brock Palen
> www.umich.edu/~brockp
> Center for Advanced Computing
> brockp_at_[hidden]
> (734)936-1985
>
>
>
> On Oct 9, 2008, at 8:06 AM, Sangamesh B wrote:
>
>
>
> On Thu, Oct 9, 2008 at 5:40 AM, Jeff Squyres <jsquyres_at_[hidden]>
> wrote:
> On Oct 8, 2008, at 5:25 PM, Aurélien Bouteiller wrote:
>
> Make sure you don't use a "debug" build of Open MPI. If you use
> trunk, the build system detects it and turns on debug by default.
> It really kills performance. --disable-debug will remove all those
> nasty printfs from the critical path.
>
> You can easily tell if you have a debug build of OMPI with the
> ompi_info command:
>
> shell$ ompi_info | grep debug
> Internal debug support: no
> Memory debugging support: no
> shell$
> You want to see "no" for both of those.
>
> Yes, it is "no":
> $ /opt/ompi127/bin/ompi_info -all | grep debug
> Internal debug support: no
> Memory debugging support: no
>
> I've tested GROMACS with a single process (mpirun -np 1):
> Here are the results:
>
> OpenMPI : 120m 6s
>
> MPICH2 : 67m 44s
>
> I'm trying to build the codes with PGI, but I'm running into problems
> compiling GROMACS.
>
>
> --
> Jeff Squyres
> Cisco Systems
>
>
>