Open MPI Development Mailing List Archives

Subject: Re: [OMPI devel] RDMA CM CPC HG ready again
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2008-10-01 08:08:48


Per the call yesterday, I'll merge this into the trunk once I get it
working with Brad on PPC.

Last night on PPC, right before I had to go offline, Brad and I
discovered a missing htonl/ntohl somewhere in the code (i.e., we can
see that the IP addresses come out backwards, but we don't yet know
where the missing conversion is), so I haven't finished. We'll
probably get it fixed up today.
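
For anyone following along: network byte order is big endian, so
htonl()/ntohl() are byte swaps on little-endian x86 but no-ops on
big-endian PPC, which is why a missing conversion can go unnoticed on
one architecture and show up as a byte-reversed address on another.
A tiny standalone sketch of the general pattern (not OMPI code):

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>

int main(void)
{
    struct in_addr addr;

    /* inet_pton() stores the address in network (big endian) byte
       order: 10.0.0.1 becomes the bytes 0a 00 00 01. */
    if (inet_pton(AF_INET, "10.0.0.1", &addr) != 1) {
        return 1;
    }

    /* Correct handling: convert to host order before using the value
       as an integer, and back to network order before putting it in
       a wire message. */
    uint32_t host = ntohl(addr.s_addr);
    uint32_t wire = htonl(host);

    printf("network order: 0x%08x\n", (unsigned int) addr.s_addr);
    printf("host order:    0x%08x\n", (unsigned int) host);
    printf("back to wire:  0x%08x\n", (unsigned int) wire);

    /* Because the conversions are no-ops on PPC but swaps on x86,
       code with a missing htonl/ntohl can behave differently on the
       two: once raw little-endian bytes are read as network order
       (or vice versa), 10.0.0.1 comes out as 1.0.0.10 -- the
       "backwards" symptom described above. */
    return 0;
}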

On Sep 30, 2008, at 10:05 AM, Jeff Squyres wrote:

> (putting this on devel just so that others can see it)
>
> Ok, I've put everything we've talked about into the RDMA CM CPC HG
> tree, and it should now work out of the box with:
>
> - any iWARP device (no kernel hacks needed to have the initiator
> send first)
> - any IB device (initiator_depth and responder_resources are now
> set up properly)
> - any [valid but] bizarre IP addressing scheme
>
> Could everyone try the HG tree again to ensure it still/now works
> for you out of the box?
>
> http://www.open-mpi.org/hg/hgwebdir.cgi/jsquyres/openib-fd-progress/
>
> Try with changeset 106 (b046bf97deab) or later. The only thing
> still missing is better scalability when allocating the CTS
> buffers. Now that all the other changes are in, I'll be working on
> that today and tomorrow.
>
> --
> Jeff Squyres
> Cisco Systems
>
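
For anyone not steeped in librdmacm: the initiator_depth /
responder_resources item in the quoted list refers to the fields of
struct rdma_conn_param that are passed to rdma_connect() and
rdma_accept(). Roughly, the active side queries the device limits and
fills those fields in before connecting. The helper below is only an
illustrative sketch of that librdmacm usage, not the actual openib
BTL code:

#include <string.h>
#include <stdint.h>
#include <rdma/rdma_cma.h>

/* Illustrative only: fill in the connect parameters from the local
   device limits before calling rdma_connect().  Real code also has
   to agree with the passive side, which caps these in rdma_accept(). */
static int connect_with_rd_atomic_limits(struct rdma_cm_id *id)
{
    struct ibv_device_attr attr;
    struct rdma_conn_param param;

    if (ibv_query_device(id->verbs, &attr)) {
        return -1;
    }

    memset(&param, 0, sizeof(param));
    /* Outstanding RDMA reads/atomics this side will issue... */
    param.initiator_depth     = (uint8_t) attr.max_qp_init_rd_atom;
    /* ...and how many it is willing to service as the target. */
    param.responder_resources = (uint8_t) attr.max_qp_rd_atom;
    param.retry_count         = 7;
    param.rnr_retry_count     = 7;   /* "infinite" RNR retry */

    return rdma_connect(id, &param);
}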

-- 
Jeff Squyres
Cisco Systems