
Open MPI Development Mailing List Archives


From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2007-04-13 20:05:28


Configure with the --with-devel-headers switch. This will install
all the developer headers.

If you care, check out "./configure --help" -- that shows all the
options available to the configure script (including
--with-devel-headers).
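
For example, reusing the prefix from your commands below:

   ./configure --prefix=/usr/local --with-devel-headers
   make all install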

On Apr 13, 2007, at 7:36 PM, pooja_at_[hidden] wrote:

> Hi
>
> I have downloaded the developer version of the source code by
> downloading a nightly Subversion snapshot tarball, and have
> installed Open MPI using:
>
> ./configure --prefix=/usr/local
> make all install
>
> But I want to install all the development headers, so that I can
> write an application that uses OMPI's internal headers.
>
>
> Thanks and Regards
> Pooja
>
>> On Apr 1, 2007, at 3:12 PM, Ralph Castain wrote:
>>
>>> I can't help you with the BTL question. On the others:
>>
>> Yes, you can "sorta" call BTLs directly from application programs
>> (are you trying to use MPI alongside other communication libraries,
>> and using the BTL components as a sample?), but there are issues
>> involved with this.
>>
>> First, you need to install Open MPI with all the development
>> headers. Open MPI normally installs only "mpi.h" and a small number
>> of other headers; installing *all* the headers will allow you to
>> write applications that use OMPI's internal headers (such as btl.h)
>> while developing outside of the Open MPI source tree.
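
Once installed that way, compiling your application against those
headers might look like this -- whether the mpicc wrapper finds the
internal headers on its own depends on the install, and the include
path shown here is an assumption to adjust for your prefix:

   mpicc my_btl_app.c -o my_btl_app
   # or, if the internal headers are not picked up automatically:
   mpicc -I/usr/local/include/openmpi my_btl_app.c -o my_btl_app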
>>
>> Second, you probably won't want to access the BTLs directly. To
>> explain why, here's how the code is organized (even if the specific
>> call sequence is not exactly this layered, for
>> performance/optimization reasons):
>>
>> MPI layer (e.g., MPI_SEND)
>> -> PML
>> -> BML
>> -> BTL
>>
>> You have two choices:
>>
>> 1. Go through the PML instead (this is what we do in the MPI
>> collectives, for example) -- but this imposes MPI semantics on
>> sending and receiving, which presumably you are trying to avoid.
>> Check out ompi/mca/pml/pml.h.
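
As a concrete illustration of option 1, here is a minimal sketch of
calling the selected PML through its global module struct. It assumes
an already-initialized MPI library; the exact types and signatures
live in ompi/mca/pml/pml.h, so verify them against your tree before
relying on this:

   /* pml_sketch.c -- proof-of-concept only, not a supported API */
   #include "mpi.h"
   #include "ompi/mca/pml/pml.h"  /* mca_pml + send mode constants */

   /* Send one int to 'peer' through the PML, bypassing MPI_Send. */
   static int pml_ping(int peer)
   {
       int payload = 42;
       /* mca_pml holds the function pointers of whichever PML
        * component was selected at MPI_Init time (e.g., ob1). */
       return mca_pml.pml_send(&payload, 1, MPI_INT, peer,
                               /* tag */ 99,
                               MCA_PML_BASE_SEND_STANDARD,
                               MPI_COMM_WORLD);
   }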
>>
>> 2. Go through the BML instead -- the BTL Management Layer. This is
>> essentially a multiplexor for all the BTLs that have been
>> instantiated. I'm guessing that this is what you want to do
>> (remember that OMPI has true multi-device support; using the BML and
>> multiple BTLs is one of the ways that we do this). Have a look at
>> ompi/mca/bml/bml.h for the interface.
>>
>> There is also currently no mechanism to get the BML and BTL pointers
>> that were instantiated by the PML. However, if you're just doing
>> proof-of-concept code, you can extract these directly from the MPI
>> layer's global variables to see how this stuff works.
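
Strictly as a proof-of-concept sketch, digging those pointers out of
the MPI layer's globals might look like the following. The names used
here -- ompi_comm_peer_lookup(), the proc_bml field, the btl_eager
array, and mca_bml_base_btl_array_get_next() -- are assumptions to
double-check against ompi/proc/proc.h and ompi/mca/bml/bml.h in your
checkout:

   #include "mpi.h"
   #include "ompi/proc/proc.h"
   #include "ompi/communicator/communicator.h"
   #include "ompi/mca/bml/bml.h"

   /* Return one of the BTLs that the BML would use for eager sends
    * to MPI_COMM_WORLD rank 'rank'. */
   static mca_bml_base_btl_t *first_eager_btl(int rank)
   {
       /* The peer's proc structure, via the MPI layer's globals. */
       ompi_proc_t *proc = ompi_comm_peer_lookup(MPI_COMM_WORLD, rank);

       /* The PML cached a BML endpoint on the proc at add_procs()
        * time. */
       mca_bml_base_endpoint_t *ep =
           (mca_bml_base_endpoint_t *) proc->proc_bml;

       /* Round-robins across the BTLs eligible for eager traffic. */
       return mca_bml_base_btl_array_get_next(&ep->btl_eager);
   }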
>>
>> To have full interoperability of the underlying BTLs and between
>> multiple upper-layer communication libraries (e.g., between OMPI and
>> something else) is something that we have talked about a little, but
>> have not done much work on.
>>
>> To see the BTL interface (just for completeness), see
>> ompi/mca/btl/btl.h.
>>
>> You can probably see the pattern here... In all of Open MPI's
>> frameworks, the public interface is in
>> <level>/mca/<framework>/<framework>.h, where <level> is one of opal,
>> orte, or ompi, and <framework> is the name of the framework.
>>
>>> 1. States are reported via the orte/mca/smr framework. You will
>>> see the states listed in orte/mca/smr/smr_types.h. We track both
>>> process and job states. Hopefully, the state names will be somewhat
>>> self-explanatory and indicative of the order in which they are
>>> traversed. The job states are set when *all* of the processes in
>>> the job reach the corresponding state.
>>
>> Note that these are very coarse-grained process-level states (e.g.,
>> is a given process running or not?). It's not clear what kind of
>> states you were asking about -- the Open MPI code base has many
>> internal state machines for various message passing and other
>> mechanisms.
>>
>> What information are you looking for, specifically?
>>
>>> 2. I'm not sure what you mean by mapping MPI processes to
>>> "physical" processes, but I assume you mean how we assign MPI ranks
>>> to processes on specific nodes. You will find that done in the
>>> orte/mca/rmaps framework. We currently have only one component in
>>> that framework -- the round-robin implementation -- which maps
>>> either by slot or by node, as indicated by the user. That code is
>>> fairly heavily commented, so you can hopefully understand what it
>>> is doing.
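
For example, the two mappings correspond to mpirun's --byslot and
--bynode options:

   mpirun --byslot -np 4 ./a.out   # fill each node's slots first
   mpirun --bynode -np 4 ./a.out   # round-robin ranks across nodes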
>>>
>>> Hope that helps!
>>> Ralph
>>>
>>>
>>> On 4/1/07 1:32 PM, "pooja_at_[hidden]" <pooja_at_[hidden]>
>>> wrote:
>>>
>>>> Hi
>>>> I am Pooja and I am working on a course project which requires me
>>>> to track the internal state changes of MPI, and to figure out how
>>>> ORTE maps MPI processes to actual physical processes. I also need
>>>> to find a way to get the BTL transports to work directly with
>>>> MPI-level calls.
>>>> I just want to know whether this is possible and, if yes, what
>>>> procedure I should follow, or which files I should look into (for
>>>> changes).
>>>>
>>>>
>>>> Please Help
>>>>
>>>> Thanks and Regards
>>>> Pooja

-- 
Jeff Squyres
Cisco Systems