Additional testing seems to show that the problem is related to barriers and how often they poll to determine whether it is time to leave. Is there an MCA parameter or environment variable that lets me control the polling frequency while in barriers?
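For reference, MCA parameters can be set either on the mpirun command line or through the environment. The example below uses mpi_yield_when_idle, which (as far as I can tell) only controls whether a process yields the CPU while it spins waiting for progress, e.g. inside MPI_Barrier, rather than the polling frequency itself, so this is a guess at the closest existing knob rather than a confirmed answer:

    # list every MCA parameter this build supports
    ompi_info --param all all

    # ask processes to yield the CPU while polling
    mpirun --mca mpi_yield_when_idle 1 -np 48 ./app

    # equivalent environment-variable form
    export OMPI_MCA_mpi_yield_when_idle=1

Since we are running 48 processes on 32 processors with SMT enabled, yielding while idle may be relevant even if the polling rate itself cannot be tuned.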
From: users-bounces_at_[hidden] [mailto:users-bounces_at_[hidden]] On Behalf Of Price, Brian M (N-KCI)
Sent: Wednesday, December 01, 2010 11:29 AM
To: Open MPI Users
Cc: Stern, Craig J
Subject: EXTERNAL: [OMPI users] Open MPI vs IBM MPI performance help
Open MPI version: 1.4.3
Platform: IBM P5, 32 processors, 256 GB memory, Symmetric Multi-Threading (SMT) enabled
Application: starts up 48 processes and communicates using MPI_Barrier, MPI_Get, and MPI_Put (many transfers, large amounts of data; a sketch of this pattern appears at the end of this message)
Issue: When built against Open MPI instead of IBM's MPI ('poe' from the HPC Toolkit), the application runs 3-5 times slower.
I suspect that IBM's MPI implementation takes advantage of some knowledge about data transfers that Open MPI does not.
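For reference, here is a minimal sketch of the kind of one-sided pattern involved; the fence-based synchronization, the buffer size, and the neighbor exchange are illustrative assumptions, not the actual application code:

    /* Illustrative sketch only: MPI_Put/MPI_Get on a window, with fences and
     * a barrier separating communication phases.  Build with mpicc. */
    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        const int count = 1 << 20;                   /* 1M doubles per rank (assumed size) */
        double *win_buf = malloc(count * sizeof(double));
        double *local   = malloc(count * sizeof(double));
        for (int i = 0; i < count; i++)
            local[i] = (double)rank;

        MPI_Win win;
        MPI_Win_create(win_buf, (MPI_Aint)count * sizeof(double), sizeof(double),
                       MPI_INFO_NULL, MPI_COMM_WORLD, &win);

        int peer = (rank + 1) % nprocs;              /* arbitrary neighbor exchange */

        MPI_Win_fence(0, win);                       /* open an access epoch */
        MPI_Put(local, count, MPI_DOUBLE, peer, 0, count, MPI_DOUBLE, win);
        MPI_Win_fence(0, win);                       /* complete the puts */

        MPI_Get(local, count, MPI_DOUBLE, peer, 0, count, MPI_DOUBLE, win);
        MPI_Win_fence(0, win);                       /* complete the gets */

        MPI_Barrier(MPI_COMM_WORLD);                 /* explicit barrier between phases */

        MPI_Win_free(&win);
        free(win_buf);
        free(local);
        MPI_Finalize();
        return 0;
    }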