
Open MPI User's Mailing List Archives


This web mail archive is frozen.


Subject: Re: [OMPI users] Performance: MPICH2 vs OpenMPI
From: Brian Dobbins (bdobbins_at_[hidden])
Date: 2008-10-09 10:24:01


On Thu, Oct 9, 2008 at 10:13 AM, Jeff Squyres <jsquyres_at_[hidden]> wrote:

> On Oct 9, 2008, at 8:06 AM, Sangamesh B wrote:
>
>> OpenMPI : 120m 6s
>> MPICH2 : 67m 44s
>>
>
> That seems to indicate that something else is going on -- with -np 1, there
> should be no MPI communication, right? I wonder if the memory allocator
> performance is coming into play here.

  I'd be more inclined to double-check how the Gromacs app is being compiled
in the first place - I wouldn't think the OpenMPI memory allocator would
make anywhere near that much difference. Sangamesh, do you know what
command lines were used to compile each of these? Someone correct me if I'm
wrong, but *if* MPICH2 embeds optimization flags in its 'mpicc' wrapper and
OpenMPI does not, and he isn't specifying any optimization flags when
compiling Gromacs, then the MPICH2 build will pick up those embedded flags
and be faster. I'm rusty on my GCC, too, though - does it default to -O2,
or to no optimizations at all?
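A quick way to settle the wrapper-flags question is to ask each 'mpicc' what it would actually run: Open MPI's wrappers take --showme and MPICH2's take -show, both of which print the underlying compiler invocation (embedded -O flags and all) without compiling anything. A minimal sketch, assuming the wrapper you want to inspect is first in PATH; it's guarded so it exits cleanly on machines without an MPI install:

```shell
#!/bin/sh
# Print the full compile line the MPI wrapper would invoke, so any
# embedded optimization flags (e.g. -O2) become visible.
if command -v mpicc >/dev/null 2>&1; then
    # Open MPI wrappers answer to --showme; MPICH2 wrappers to -show.
    mpicc --showme 2>/dev/null || mpicc -show
else
    echo "mpicc not found in PATH"
fi
```

Running this once against each stack makes the comparison concrete; and passing explicit flags yourself (e.g. CFLAGS="-O3" when configuring Gromacs) takes the wrapper defaults out of the equation either way.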

  Since the benchmark is readily available, I'll try running it later
today; I didn't get a chance last night.

  Cheers,
  - Brian