
Subject: Re: [OMPI users] btl_openib_cpc_include rdmacm questions
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2011-04-21 18:21:33


Over IB, I'm not sure there is much of a drawback. It might be slightly slower to establish queue pairs (QPs), but I don't think that matters much.

Over iWARP, rdmacm can cause connection storms as you scale to thousands of MPI processes.
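
For reference, a minimal sketch of how one might make this the default, assuming the standard Open MPI MCA-parameter mechanisms (environment variable, mpirun option, or the MCA params file; the exact file path depends on your installation prefix):

    # per-user or job-script environment variable
    export OMPI_MCA_btl_openib_cpc_include=rdmacm

    # or per-invocation on the mpirun command line
    # (./my_app is just a placeholder for your application)
    mpirun --mca btl_openib_cpc_include rdmacm ./my_app

    # or system-wide, one "name = value" per line, in
    # <prefix>/etc/openmpi-mca-params.conf
    btl_openib_cpc_include = rdmacm

Running "ompi_info --param btl openib" should show the value that will actually be used.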

On Apr 20, 2011, at 5:03 PM, Brock Palen wrote:

> We had another user hit the bug that causes collectives (this time MPI_Bcast()) to hang over IB; it was fixed by setting:
>
> btl_openib_cpc_include rdmacm
>
> My question: if we make this the default on our system via an environment variable, does it introduce any performance or other issues we should be aware of?
>
> Is there a reason we should not use rdmacm?
>
> Thanks!
>
> Brock Palen
> www.umich.edu/~brockp
> Center for Advanced Computing
> brockp_at_[hidden]
> (734)936-1985
>
>
>
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users

-- 
Jeff Squyres
jsquyres_at_[hidden]
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/