
Subject: Re: [OMPI users] build OpenMPI with OpenIB
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2008-03-07 07:18:02


On Mar 7, 2008, at 5:36 AM, Yuan Wan wrote:

> I want to build OpenMPI-1.2.5 on my Infiniband cluster which has
> OFED-2.1
> installed.
>
> I configured OpenMPI as:
> ----------------------------------------------------------------------------
> ./configure --prefix=/exports/home/local/Cluster-Apps/openmpi/gcc/64/1.2.5 \
> --enable-shared --enable-static --enable-debug \
> --with-openib=/usr/local/Cluster-Apps/infinipath/2.1/ofed
> ----------------------------------------------------------------------------
>
> And 'ompi_info | grep openib' only shows:
>
> MCA btl: openib (MCA v1.0, API v1.0.1, Component v1.2.5)
>
> I cannot see:
>
> MCA mpool: openib (MCA v1.0, API v1.0, Component v1.0)

This is ok; we changed the name of the mpool component somewhere along
the way to be "rdma" (vs. "openib").
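
If you want to double-check, you can grep ompi_info for the mpool
framework instead (just a quick sanity check; the exact version strings
will vary with your build):

   ompi_info | grep mpool

You should see an "rdma" mpool component listed where the "openib" one
used to be.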

I see that you're compiling against the InfiniPath OFED -- is there a
reason you're not building the ipath Open MPI plugins? They should
give better performance than the openib stuff. Try

   --with-psm=/usr/local/Cluster-Apps/infinipath/2.1/

(I don't know if you need to supply more to that path or not)
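
Concretely, a re-run of your configure line might look something like
this (just a sketch; I'm assuming the PSM headers and libraries really
do live under that InfiniPath tree):

   ./configure --prefix=/exports/home/local/Cluster-Apps/openmpi/gcc/64/1.2.5 \
       --enable-shared --enable-static --enable-debug \
       --with-psm=/usr/local/Cluster-Apps/infinipath/2.1/

After rebuilding, 'ompi_info | grep psm' should show the psm MTL
component if the PSM libraries were found.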

> No idea why, or whether this will cause a failure.
>
>
> When I tried to run an MPI code with the option "--mca btl
> openib,self", it failed with the following messages:

Gleb already replied here...

-- 
Jeff Squyres
Cisco Systems