
Open MPI User's Mailing List Archives


Subject: Re: [OMPI users] btl_openib_cpc_include rdmacm questions
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2011-04-21 18:21:33


Over IB, I'm not sure there is much of a drawback. It might be slightly slower to establish QPs (queue pairs), but I don't think that matters much.

Over iWARP, rdmacm can cause connection storms as you scale to thousands of MPI processes.
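
FWIW, if you do want to make it the default, the usual MCA parameter mechanisms apply. A minimal sketch (exact file locations depend on your install prefix, and ./a.out below just stands in for your application):

    # System-wide or per-user MCA parameter file, e.g.
    # <prefix>/etc/openmpi-mca-params.conf or $HOME/.openmpi/mca-params.conf:
    btl_openib_cpc_include = rdmacm

    # Or as an environment variable before launching:
    export OMPI_MCA_btl_openib_cpc_include=rdmacm

    # Or per job on the mpirun command line:
    mpirun --mca btl_openib_cpc_include rdmacm ./a.out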

On Apr 20, 2011, at 5:03 PM, Brock Palen wrote:

> Another of our users hit the bug that causes collectives (this time MPI_Bcast()) to hang over IB; it was fixed by setting:
>
> btl_openib_cpc_include rdmacm
>
> My question: if we make this the default on our system via an environment variable, does it introduce any performance or other issues we should be aware of?
>
> Is there a reason we should not use rdmacm?
>
> Thanks!
>
> Brock Palen
> www.umich.edu/~brockp
> Center for Advanced Computing
> brockp_at_[hidden]
> (734)936-1985
>
>
>
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users

-- 
Jeff Squyres
jsquyres_at_[hidden]
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/