It depends a lot on the application and how you ran it. Can you provide some more info? For example, if you oversubscribed the node (i.e., ran more processes than cores), we dial down performance to provide better CPU sharing. Another point: we don't bind processes to cores by default, while some other MPIs do. Etc.
So more info (like the mpirun command line you used, which version you used, how OMPI was configured, etc.) would help.
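Since the binding difference mentioned above is a common cause of this kind of gap, here is a minimal sketch of how binding can be enabled on the command line. This assumes an Open MPI 1.4/1.5-era install; the application name and process count are placeholders, and the exact flag names vary by version, so check `mpirun --help` on your system:

```shell
# Sketch: ask Open MPI to bind each MPI process to a core, and print
# the resulting bindings so they can be verified (1.5-series flags).
mpirun --bind-to-core --report-bindings -np 16 ./my_app

# On the 1.4 series, the equivalent is an MCA parameter:
mpirun --mca mpi_paffinity_alone 1 -np 16 ./my_app
```

With binding enabled, processes stop migrating between cores, which often recovers the cache and memory locality that other MPIs get from binding by default.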
On Dec 27, 2011, at 6:35 AM, Eric Feng wrote:
Can anyone help me?
I ran into a similar performance issue when comparing against MVAPICH2: it is much faster than Open MPI in every MPI function in my real application, but the two perform similarly in the IMB benchmark.
From: Eric Feng <firstname.lastname@example.org>
To: "email@example.com" <firstname.lastname@example.org>
Sent: Friday, December 23, 2011 9:12 PM
Subject: [OMPI users] Openmpi performance
I am running into a performance issue with Open MPI, and I hope the experts here can give me some help.
I have an application that calls sendrecv a lot, as well as isend/irecv followed by waitall. When I run it with Intel MPI, it is around 30% faster than with Open MPI.
However, if I test sendrecv using IMB, Open MPI is even faster than Intel MPI; yet with the real application, Open MPI is much slower than Intel MPI in all MPI functions according to the profiling results. So this is not a problem with one particular function; there is an overall drawback somewhere. Can anyone give me some suggestions on where to tune to make it run faster with real applications?
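To narrow down where the overall gap comes from, it helps to capture the version, build configuration, and parameter details the list usually asks about. A sketch, assuming a standard Open MPI install where `ompi_info` is on the PATH (flag output varies by version):

```shell
# Sketch: collect the configuration details useful for diagnosing
# an overall performance gap (ompi_info ships with Open MPI).
ompi_info | head -n 5          # Open MPI version and build summary
ompi_info --config             # configure-time options used for the build
ompi_info --param btl all      # transport (BTL) parameters in effect
```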
users mailing list
email@example.com
http://www.open-mpi.org/mailman/listinfo.cgi/users