Open MPI Development Mailing List Archives

Subject: Re: [OMPI devel] RDMA CM CPC HG ready again
From: Jon Mason (jon_at_[hidden])
Date: 2008-10-01 11:21:22

On Wed, Oct 01, 2008 at 08:08:48AM -0400, Jeff Squyres wrote:
> Per the call yesterday, I'll merge this into the trunk once I get it
> working with Brad on PPC.
> Brad and I discovered a missing htonl/ntohl somewhere in the code on
> PPC last night, right before I had to go offline (i.e., we can see
> that the IP addresses come out backwards, but we don't know where the
> byte swap is being missed), so I haven't finished yet. We'll probably
> get it fixed up today.
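
A minimal sketch of the byte-ordering rule being chased here
(illustrative only, not the code in question): an IPv4 address held in
host byte order must pass through htonl() before it goes on the wire,
and ntohl() on receipt. On big-endian PPC both calls are no-ops, so a
missing conversion surfaces as "backwards" addresses only when a
little-endian peer such as x86 is in the mix:

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t ip_host = 0xC0A80101;      /* 192.168.1.1 in host order */
    uint32_t ip_wire = htonl(ip_host);  /* convert before sending */
    uint32_t ip_back = ntohl(ip_wire);  /* convert after receiving */

    /* struct in_addr expects network byte order, so ip_wire prints
     * correctly; passing ip_host directly would look byte-reversed
     * on a little-endian machine. */
    struct in_addr a = { .s_addr = ip_wire };
    printf("%s (round-trip ok: %d)\n", inet_ntoa(a), ip_back == ip_host);
    return 0;
}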

My tests yesterday showed some errors. Unfortunately, I lost access to
the system before I could take a look. I'll re-run them and verify that
everything is still sane.

> On Sep 30, 2008, at 10:05 AM, Jeff Squyres wrote:
>> (putting this on devel just so that others can see it)
>> Ok, I put in all the things in the RDMA CM CPC HG tree that we've
>> talked about, and it should now work out of the box with:
>> - any iWARP device (no need for kernel hacks to make the initiator
>> send first)
>> - any IB device (it sets up initiator_depth and responder_resources
>> properly; see the sketch after this list)
>> - any [valid but] bizarre IP addressing scheme
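
A minimal sketch of what setting those two fields "properly" can look
like with librdmacm: query the device limits and clamp the
rdma_conn_param fields before rdma_connect(). This is a hypothetical
helper under assumed librdmacm usage, not the actual Open MPI code:

#include <rdma/rdma_cma.h>
#include <infiniband/verbs.h>
#include <string.h>

/* Illustrative helper (hypothetical, not from Open MPI): clamp the
 * RDMA-read credits to what the HCA supports, then connect.
 * initiator_depth is how many RDMA reads we may have outstanding as
 * the initiator; responder_resources is how many incoming reads we
 * will service. Both fields are 8-bit in struct rdma_conn_param. */
static int connect_with_rd_atomic(struct rdma_cm_id *id)
{
    struct ibv_device_attr attr;
    struct rdma_conn_param param;

    if (ibv_query_device(id->verbs, &attr))
        return -1;

    memset(&param, 0, sizeof(param));
    param.initiator_depth =
        attr.max_qp_init_rd_atom > 255 ? 255 : attr.max_qp_init_rd_atom;
    param.responder_resources =
        attr.max_qp_rd_atom > 255 ? 255 : attr.max_qp_rd_atom;
    param.retry_count = 7;      /* transport-level retries */
    param.rnr_retry_count = 7;  /* 7 means retry indefinitely */

    return rdma_connect(id, &param);
}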
>> Could everyone try the HG tree again to ensure it still/now works for
>> you out of the box?
>> progress/
>> Try with changeset 106 (b046bf97deab) or later. The only thing still
>> missing is better scalability when allocating buffers for the CTS.
>> Now that all the other changes are in, I'll be working on that today
>> and tomorrow.
>> --
>> Jeff Squyres
>> Cisco Systems
> --
> Jeff Squyres
> Cisco Systems