
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] 50% performance reduction due to OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead of using Infiniband
From: Jim Kress (jimkress_58_at_[hidden])
Date: 2009-06-24 11:05:13


Noam, Gus and List,

Did you statically link your Open MPI when you built it? If you did (the
default is NOT to do this), then that could explain the discrepancy.

Jim
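
A quick way to check the static-linking hypothesis is to inspect the build
itself. This is a hedged sketch: the install path comes from `which`, the
grep patterns are illustrative, and an IB-enabled build is only *expected*
(not guaranteed) to show the verbs libraries on the wrapper link line.

```shell
# 1. A dynamically linked Open MPI lists libmpi.so among mpirun's
#    shared-library dependencies; a statically linked build will not.
ldd "$(which mpirun)" | grep -i libmpi

# 2. ompi_info reports the wrapper compiler's link line; builds with
#    InfiniBand support typically include -lrdmacm and -libverbs here.
ompi_info -config | grep -i libs
```

Note that check 2 is exactly the one discussed below, and (as Noam points
out) its absence does not by itself prove the build lacks IB support.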

> -----Original Message-----
> From: users-bounces_at_[hidden]
> [mailto:users-bounces_at_[hidden]] On Behalf Of Noam Bernstein
> Sent: Wednesday, June 24, 2009 9:38 AM
> To: Open MPI Users
> Subject: Re: [OMPI users] 50% performance reduction due to
> OpenMPI v 1.3.2 forcing all MPI traffic over Ethernet instead
> of using Infiniband
>
>
> On Jun 23, 2009, at 6:19 PM, Gus Correa wrote:
>
> > Hi Jim, list
> >
> > On my OpenMPI 1.3.2 ompi_info -config gives:
> >
> > Wrapper extra LIBS: -lrdmacm -libverbs -ltorque -lnuma -ldl -Wl,--
> > export-dynamic -lnsl -lutil -lm -ldl
> >
> > Yours doesn't seem to have the IB libraries: -lrdmacm -libverbs
> >
> > So, I would guess your OpenMPI 1.3.2 build doesn't have IB support.
>
> The second of these statements doesn't follow from the first.
>
> My "ompi_info -config" returns
>
> ompi_info -config | grep LIBS
> Build LIBS: -lnsl -lutil -lm
> Wrapper extra LIBS: -ldl -Wl,--export-dynamic -lnsl -lutil -lm -ldl
>
> But it does have the openib BTL:
>
> ompi_info | grep openib
> MCA btl: openib (MCA v2.0, API v2.0,
> Component v1.3.2)
>
> and osu_bibw returns
>
> # OSU MPI Bi-Directional Bandwidth Test v3.0
> # Size Bi-Bandwidth (MB/s)
> 4194304 1717.43
>
> which it's certainly not getting over Ethernet. I think Jeff
> Squyres' test (ompi_info | grep openib) must be more definitive.
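>
> A hedged run-time cross-check, using Open MPI's MCA btl selection
> (the hostfile name and benchmark binary below are placeholders):

```shell
# Restrict the job to the openib and self BTLs. If the build or the
# fabric lacks working IB support, the job should fail to start
# rather than silently fall back to TCP.
mpirun --mca btl openib,self -np 2 -hostfile hosts ./osu_bibw

# Conversely, exclude the TCP BTL explicitly and let everything
# else be selected normally.
mpirun --mca btl ^tcp -np 2 -hostfile hosts ./osu_bibw
```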
>
>
> Noam
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users