Over IB, I'm not sure there is much of a drawback. It might be slightly slower to establish QPs, but I don't think that matters much.
Over iWARP, rdmacm can cause connection storms as you scale to thousands of MPI processes.
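
If you do decide to make rdmacm the default, here's a sketch of the usual ways to set an MCA parameter (assuming a standard Open MPI installation; adjust the etc/ path to your install prefix):

    # Per-user or per-job, via environment variable
    export OMPI_MCA_btl_openib_cpc_include=rdmacm

    # Per-invocation, on the mpirun command line
    mpirun --mca btl_openib_cpc_include rdmacm ...

    # System-wide, in <prefix>/etc/openmpi-mca-params.conf
    btl_openib_cpc_include = rdmacm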
On Apr 20, 2011, at 5:03 PM, Brock Palen wrote:
> We managed to have another user hit the bug that causes collectives (this time MPI_Bcast()) to hang on IB, which was fixed by setting:
> btl_openib_cpc_include rdmacm
> My question is: if we set this as the default on our system with an environment variable, does it introduce any performance or other issues we should be aware of?
> Is there a reason we should not use rdmacm?
> Brock Palen
> Center for Advanced Computing