On 7/12/2011 4:45 PM, Mohan, Ashwin wrote:
> I noticed that the exact same code took 50% more time to run on OpenMPI
> than Intel.
It would be good to know if that extra time is spent inside MPI calls or
not. There is a discussion of how you might do this here:
http://www.open-mpi.org/faq/?category=perftools You should probably
start there and narrow down your investigation.
If the difference is time spent inside MPI calls, that would point at
the MPI implementations themselves (or how they are tuned).
If the difference is time spent outside MPI calls, how you are compiling
(which serial compiler is being used, which optimization flags, etc.)
could be the issue. Or possibly how processes are placed on a node
("paffinity" or "binding" issues).
> Does the compiler flags have an effect on the efficiency of the
Sure. Ideally, most of the time is spent in parallel computation and
very little in MPI. For performance in such an "ideal" case, any
"decent" MPI implementation (OMPI and Intel hopefully among them) should
do just fine.
> Will including MPICH2 increase efficiency in running simulations using
MPICH2 and OMPI are MPI implementations. You choose one or the other
(or other options... e.g., Intel).