
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] 50% performance reduction due to OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead of using Infiniband
From: Pavel Shamis (Pasha) (pashash_at_[hidden])
Date: 2009-06-23 07:24:09


Jim,
Can you please share your MCA conf file with us?

Pasha.
Jim Kress ORG wrote:
> For the app I am using, ORCA (a Quantum Chemistry program), when it was
> compiled using openMPI 1.2.8 and run under 1.2.8 with the following in
> the openmpi-mca-params.conf file:
>
> btl=self,openib
>
> the app ran fine with no traffic over my Ethernet network and all
> traffic over my Infiniband network.
>
> However, now that ORCA has been recompiled with openMPI v1.3.2 and run
> under 1.3.2 (using the same openmpi-mca-params.conf file), the
> performance has been reduced by 50% and all the MPI traffic is going
> over the Ethernet network.
>
> As a matter of fact, the openMPI v1.3.2 performance now looks exactly
> like the performance I get if I use MPICH 1.2.7.
>
> Anyone have any ideas:
>
> 1) How could this have happened?
>
> 2) How can I fix it?
>
> A 50% reduction in performance is just not acceptable. Ideas/
> suggestions would be appreciated.
>
> Jim
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>
>
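For context, the configuration in question lives in openmpi-mca-params.conf. A minimal sketch of that file, using only the setting quoted above (the ompi_info and mpirun lines in the comments are standard Open MPI tooling; the application name is a placeholder):

```shell
# openmpi-mca-params.conf -- restrict MPI point-to-point traffic to the
# loopback (self) and InfiniBand (openib) BTLs, as in the original setup
btl = self,openib

# To check which BTL components an installation actually provides:
#   ompi_info | grep btl
# To override the conf file for a single run:
#   mpirun --mca btl self,openib -np 4 ./my_app
```

Note that this file must be the one read by the Open MPI installation actually used at run time; an application linked against a different Open MPI build will not see it.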