
Open MPI Development Mailing List Archives


From: Ralph H Castain (rhc_at_[hidden])
Date: 2006-09-07 12:22:31


Jeff and I talked about this for a while this morning, and we both agree
(yes, I did change my mind after we discussed all the ramifications). It
appears that we should be able to consolidate the code into a single
component with the right configuration-system "magic" - and that would
definitely be preferable.

My primary concern originally was the lack of knowledge and
documentation on the configuration system. I know that I don't know enough
about that system to make everything work in a single component. The
two-component approach would have allowed you to avoid dealing with that
system. However, with Jeff's willingness to help in that regard, the
approach he recommends would be easier for everyone.

Hope that doesn't cause too much of a problem.
Ralph

On 9/7/06 9:46 AM, "Jeff Squyres" <jsquyres_at_[hidden]> wrote:

> On 9/1/06 12:21 PM, "Adrian Knoth" <adi_at_[hidden]> wrote:
>
>> On Fri, Sep 01, 2006 at 07:01:25AM -0600, Ralph Castain wrote:
>>
>>>> Do you agree to go on with two oob components, tcp and tcp6?
>>> Yes, I think that's the right approach
>>
>> It's a deal. ;)
>
> Actually, I would disagree here (sorry for jumping in late! :-( ).
>
> Given the amount of code duplication, it seems like a big shame to make a
> separate component that is almost identical.
>
> Can we just have one component that handles both ipv4 and ipv6? Appropriate
> #if's can be added (I'm willing to help with the configure.m4 mojo -- the
> stuff to tell OMPI whether ipv4 and/or ipv6 support can be found and to set
> the #define's appropriately).
>
> More specifically -- I can help with component / configure / build system
> issues. I'll defer on the whole how-to-wire-them-up issue for the moment
> (I've got some other fires burning that must be tended to :-\ ).
>
> My $0.02: OOB is the first target to get working -- once you can orterun
> non-MPI apps properly across ipv6 and/or ipv4 nodes, then move on to the MPI
> layer and take the same approach there (e.g., one TCP btl with configure.m4
> mojo, etc.).