On Thursday 07 September 2006 18:42, George Bosilca wrote:
> I still wonder why we need any configuration "magic". We don't want
> to be the only one around supporting IPv4 OR IPv6. Supporting both of
> them simultaneously can be interesting, and it does not require huge
> changes. In fact, we have a problem only at the connection step;
> everything else will be identical.
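
To make that point concrete: a minimal sketch of such a
protocol-independent connect step, assuming plain POSIX sockets;
connect_any() is a made-up helper name, not Open MPI code. With
getaddrinfo() and AF_UNSPEC, the same loop walks IPv6 and IPv4
candidates alike:

    /* Sketch only: getaddrinfo() hides the address family, so one
     * loop handles IPv4 and IPv6 peers identically. */
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    static int connect_any(const char *host, const char *port)
    {
        struct addrinfo hints, *res, *ai;
        int fd = -1;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family   = AF_UNSPEC;     /* IPv4 and IPv6 */
        hints.ai_socktype = SOCK_STREAM;

        if (getaddrinfo(host, port, &hints, &res) != 0) {
            return -1;
        }
        for (ai = res; ai != NULL; ai = ai->ai_next) {
            fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
            if (fd < 0) {
                continue;
            }
            if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0) {
                break;                     /* connected */
            }
            close(fd);
            fd = -1;
        }
        freeaddrinfo(res);
        return fd;                         /* -1 if nothing worked */
    }
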
> In fact, as we're talking about the TCP layer, we might want to
> finish the discussion we had a while ago, about merging the OOB and
> the BTL in one component. They do have very similar functions, and
> right now we have to maintain 2 components. I think it's high time
> to do the merge, and move the resulting component or whatever
> down into the OPAL layer.
> I even volunteer for that. Next week I will be away, so I will come
> back with a design for the phone conference on ... well beginning of
Sounds like the most reasonable solution to me. At the moment the TCP BTL
would have a problem when an Open MPI job is spawned across multiple
cells where at least 2 cells have the same private IP address range. In this
scenario a process of one cell could mistake a process from the other cell
for one of its own, since their addresses collide.
That's not really an IPv6-specific problem, but when we are thinking about
moving the BTL down to the OPAL layer, we should take care of it. I'm not
sure if other BTLs have similar problems (e.g. 2 InfiniBand cells connected
via TCP).
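
One way to picture the disambiguation this would need: key peers on a
(cell, address) pair instead of the raw address alone. The type and
field names below (opal_cell_peer_t, cell_id) are invented purely for
illustration, not existing Open MPI structures.

    /* Hypothetical sketch: a bare address is ambiguous across cells
     * that reuse the same private range; a cell id breaks the tie. */
    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>

    typedef struct {
        uint32_t cell_id;               /* unique per cell */
        struct sockaddr_storage addr;   /* holds IPv4 or IPv6 */
    } opal_cell_peer_t;

    static int peer_equal(const opal_cell_peer_t *a,
                          const opal_cell_peer_t *b)
    {
        /* 192.168.0.5 in cell 0 must not match 192.168.0.5 in
         * cell 1, so compare the cell id first.  The memcmp()
         * assumes both addr fields were zeroed before filling. */
        return a->cell_id == b->cell_id &&
               0 == memcmp(&a->addr, &b->addr, sizeof(a->addr));
    }
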
> On Sep 7, 2006, at 12:22 PM, Ralph H Castain wrote:
> > Jeff and I talked about this for a while this morning, and we both
> > agree (yes, I did change my mind after we discussed all the
> > ramifications). It appears that we should be able to consolidate the
> > code into a single component with the right configuration system
> > "magic" - and that would definitely be preferable.
> > My primary concern originally was with the lack of knowledge and
> > documentation on the configuration system. I know that I don't know
> > enough about that system to make everything work in a single
> > component. The separate-component method would have allowed you to
> > remain ignorant of that system. However, with Jeff's willingness to
> > help in that regard, the approach he recommends would be easier for
> > everyone.
> > Hope that doesn't cause too much of a problem.
> > Ralph
> > On 9/7/06 9:46 AM, "Jeff Squyres" <jsquyres_at_[hidden]> wrote:
> >> On 9/1/06 12:21 PM, "Adrian Knoth" <adi_at_[hidden]> wrote:
> >>> On Fri, Sep 01, 2006 at 07:01:25AM -0600, Ralph Castain wrote:
> >>>>> Do you agree to go on with two oob components, tcp and tcp6?
> >>>> Yes, I think that's the right approach
> >>> It's a deal. ;)
> >> Actually, I would disagree here (sorry for jumping in late! :-( ).
> >> Given the amount of code duplication, it seems like a big shame to
> >> make a separate component that is almost identical.
> >> Can we just have one component that handles both IPv4 and IPv6?
> >> Appropriate #if's can be added (I'm willing to help with the
> >> configure.m4 mojo -- the stuff to tell OMPI whether IPv4 and/or IPv6
> >> support can be found and to set the #define's appropriately).
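
A rough sketch of what such #if'd code could look like; here
OPAL_ENABLE_IPV6 merely stands in for whatever #define the configure
test would set, and the function is illustrative, not actual
component source:

    /* Sketch only: branch on a configure-generated define. */
    #include <sys/socket.h>

    static int create_listen_socket(void)
    {
        int fd = -1;
    #if OPAL_ENABLE_IPV6
        /* Try IPv6 first; most stacks also accept v4-mapped
         * connections on such a socket. */
        fd = socket(AF_INET6, SOCK_STREAM, 0);
    #endif
        if (fd < 0) {
            fd = socket(AF_INET, SOCK_STREAM, 0);  /* IPv4 fallback */
        }
        return fd;
    }
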
> >> More specifically -- I can help with component / configure / build
> >> system issues. I'll defer on the whole how-to-wire-them-up issue for
> >> the moment (I've got some other fires burning that must be tended
> >> to :-\ ).
> >> My $0.02: OOB is the first target to get working -- once you can
> >> orterun non-MPI apps properly across IPv6 and/or IPv4 nodes, then
> >> move on to the MPI layer and take the same approach there (e.g., one
> >> TCP BTL with configure.m4 mojo, etc.).
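
For the configure.m4 side, such a check usually reduces to compiling a
tiny test program; a sketch of what its body might look like, assuming
the probe simply tests whether the headers know struct sockaddr_in6:

    /* Sketch of a configure-time probe: if this compiles, the IPv6
     * types are available and the #define can be set. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        struct sockaddr_in6 sin6;
        sin6.sin6_family = AF_INET6;
        return (int) sizeof(sin6);
    }
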
> "Half of what I say is meaningless; but I say it so that the other
> half may reach you"
> Kahlil Gibran