Hi,
 
Thanks for the information.
 
Here is the output of ompi_info
 
[root@micrompi-1 ompi]# ompi_info
                Open MPI: 1.0a1r6612M
   Open MPI SVN revision: r6612M
                Open RTE: 1.0a1r6612M
   Open RTE SVN revision: r6612M
                    OPAL: 1.0a1r6612M
       OPAL SVN revision: r6612M
                  Prefix: /openmpi
 Configured architecture: x86_64-redhat-linux-gnu
           Configured by: root
           Configured on: Thu Aug  4 23:31:51 IST 2005
          Configure host: micrompi-1
                Built by: root
                Built on: Thu Aug  4 23:43:29 IST 2005
              Built host: micrompi-1
              C bindings: yes
            C++ bindings: yes
      Fortran77 bindings: yes (all)
      Fortran90 bindings: no
              C compiler: gcc
     C compiler absolute: /usr/bin/gcc
            C++ compiler: g++
   C++ compiler absolute: /usr/bin/g++
      Fortran77 compiler: g77
  Fortran77 compiler abs: /usr/bin/g77
      Fortran90 compiler: none
  Fortran90 compiler abs: none
             C profiling: yes
           C++ profiling: yes
     Fortran77 profiling: yes
     Fortran90 profiling: no
          C++ exceptions: no
          Thread support: posix (mpi: no, progress: no)
  Internal debug support: yes
     MPI parameter check: runtime
Memory profiling support: yes
Memory debugging support: yes
         libltdl support: 1
           MCA allocator: basic (MCA v1.0, API v1.0, Component v1.0)
           MCA allocator: bucket (MCA v1.0, API v1.0, Component v1.0)
                MCA coll: basic (MCA v1.0, API v1.0, Component v1.0)
                MCA coll: self (MCA v1.0, API v1.0, Component v1.0)
                  MCA io: romio (MCA v1.0, API v1.0, Component v1.0)
               MCA mpool: mvapi (MCA v1.0, API v1.0, Component v1.0)
               MCA mpool: sm (MCA v1.0, API v1.0, Component v1.0)
                 MCA pml: teg (MCA v1.0, API v1.0, Component v1.0)
                 MCA pml: uniq (MCA v1.0, API v1.0, Component v1.0)
                 MCA ptl: self (MCA v1.0, API v1.0, Component v1.0)
                 MCA ptl: sm (MCA v1.0, API v1.0, Component v1.0)
                 MCA ptl: tcp (MCA v1.0, API v1.0, Component v1.0)
                MCA topo: unity (MCA v1.0, API v1.0, Component v1.0)
                 MCA gpr: proxy (MCA v1.0, API v1.0, Component v1.0)
                 MCA gpr: replica (MCA v1.0, API v1.0, Component v1.0)
                 MCA iof: proxy (MCA v1.0, API v1.0, Component v1.0)
                 MCA iof: svc (MCA v1.0, API v1.0, Component v1.0)
                  MCA ns: proxy (MCA v1.0, API v1.0, Component v1.0)
                  MCA ns: replica (MCA v1.0, API v1.0, Component v1.0)
                 MCA oob: tcp (MCA v1.0, API v1.0, Component v1.0)
                 MCA ras: host (MCA v1.0, API v1.0, Component v1.0)
                 MCA rds: hostfile (MCA v1.0, API v1.0, Component v1.0)
                 MCA rds: resfile (MCA v1.0, API v1.0, Component v1.0)
               MCA rmaps: round_robin (MCA v1.0, API v1.0, Component v1.0)
                MCA rmgr: proxy (MCA v1.0, API v1.0, Component v1.0)
                MCA rmgr: urm (MCA v1.0, API v1.0, Component v1.0)
                 MCA rml: oob (MCA v1.0, API v1.0, Component v1.0)
                 MCA pls: fork (MCA v1.0, API v1.0, Component v1.0)
                 MCA pls: proxy (MCA v1.0, API v1.0, Component v1.0)
                 MCA pls: rsh (MCA v1.0, API v1.0, Component v1.0)
 
The Open MPI version that I am using is r6612 (as shown in the ompi_info
output above). There is NO btl framework listed, whereas the mvapi mpool
component was built.
 
In the configure options, giving --with-btl-mvapi=/opt/topspin should be
sufficient, since that directory contains the include and lib64
subdirectories, so configure should pick up everything it needs. I have
also set the following flags:
 
 
export CFLAGS="-I/optl/topspin/include -I/opt/topspin/include/vapi"
export LDFLAGS="-lmosal -lvapi -L/opt/topspin/lib64"
export btl_mvapi_LDFLAGS=$LDFLAGS
export btl_mvapi_LIBS=$LDFLAGS
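
For reference, the same settings could presumably also be passed
directly on the configure command line rather than exported; here is a
minimal sketch, using the /opt/topspin paths described above:

  # Pass the Topspin include/lib locations as configure arguments
  # instead of exported environment variables.
  ./configure --prefix=/openmpi --with-btl-mvapi=/opt/topspin \
      CPPFLAGS="-I/opt/topspin/include -I/opt/topspin/include/vapi" \
      LDFLAGS="-L/opt/topspin/lib64" LIBS="-lmosal -lvapi"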
 
I will configure and build the latest code. To get the latest code, I
ran the following command; please let me know if this is not correct.
 
svn co -r6613 http://svn.open-mpi.org/svn/ompi/trunk ompi
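
If the tree is already checked out, I assume it can also be brought up
to the same revision with an update instead of a fresh checkout:

  # update an existing checkout to revision 6613
  cd ompi
  svn update -r 6613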
 
I configured it as follows:
 
./configure --prefix=/openmpi --with-btl-mvapi=/opt/topspin/
 
When I run make all, it keeps configuring again and again; it goes into
a loop. On my machine I do not need libmpga and libmtl_common, so I
removed the -lmpga and -lmtl_common entries from the
config/ompi_check_mvapi.m4 file and then ran autogen.sh.
 
I don't have any clue why configure keeps running in a loop during the
build. I can see that config.status --recheck is being invoked from the
Makefile, and I suspect this is the reason configure runs repeatedly.
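
In case it helps, here is one thing I plan to try; my assumption is that
after editing the .m4 file and re-running autogen.sh, the regenerated
configure/aclocal.m4 ended up newer than config.status, so the generated
Makefile rules keep invoking config.status --recheck:

  # start from a clean tree so all generated files end up newer than
  # their inputs, then redo the full autogen/configure/build sequence
  make distclean
  ./autogen.sh
  ./configure --prefix=/openmpi --with-btl-mvapi=/opt/topspin
  make all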
 
 
Can someone help with this?
 
Thanks
-Sridhar


From: devel-bounces@open-mpi.org on behalf of Jeff Squyres
Sent: Thu 8/4/2005 4:29 PM
To: Open MPI Developers
Subject: Re: [O-MPI devel] Fwd: Regarding MVAPI Component in Open MPI

On Aug 4, 2005, at 6:43 AM, Jeff Squyres wrote:

>> I got the Open MPI tarball and could configure and build it on the
>> AMD x86_64 arch.

Excellent.  Note, however, that it's probably better to get a
Subversion checkout.  As this is the current head of our development
tree, it's a constantly moving target -- having a Subversion checkout
will help you keep up with our progress.

>> In our case, we need to enable MVAPI and disable OpenIB. For this, I
>> have moved .ompi_ignore file from mvapi directory to openib directory.
>> I could see that OpenIB was disabled as the entire openib tree was
>> skipped by the autogen.sh script.

It depends on what version of the tarball you got -- in the version
that I have, the mvapi components (both btl and mpool) do not have
.ompi_ignore files (we recently removed them -- July 27th, r6613).

Additionally, you should not need to run autogen.sh in a tarball (in
fact, autogen.sh should warn you if you try to do this).  autogen.sh is
only required in a Subversion checkout.  Please see the top-level
HACKING file in a Subversion checkout (I don't think that it is
included in the tarball).

Finally, note that you'll need to give additional --with options to
configure to tell it where the MVAPI libraries and header files are
located -- more on this below.

>> While running Pallas across the nodes, I could see that data is
>> passing over Gigabit Ethernet and NOT over InfiniBand.  Does anyone
>> have any idea why data is going over GigE and NOT over InfiniBand?
>> Do I have to set any configuration options? Do I have to give any
>> run-time options? I have tried mpirun -mca btl mvapi, but to no avail.

What is the output of the ompi_info command?  This will tell you if the
mvapi component is compiled and installed (it sounds like it is not).

>> I could make out that the TCP component is being used, and in order
>> to disable tcp, I copied .ompi_ignore into the directories
>> /ompi/orte/mca/oob/tcp and /ompi/ompi/mca/ptl/tcp. But this time the
>> program fails with a segmentation fault.

Right now, IIRC, we don't have checks to ensure that there are valid
paths from one MPI process to another -- which is probably the cause of
the seg fault.

Also note that .ompi_ignore is an autogen mechanism.  It is really
intended for developers who want to protect parts of the tree during
development when it is not ready for general use.  It is not really
intended as a general way to enable or disable components.

>> These are the configure options that I have given while configuring
>> OpenMPI.
>>  
>> ./configure --prefix=/openmpi --with-btl-mvapi=/usr/local/topspin/
>> --with-btl-mvapi-libdir=/usr/local/topspin --with-mvapi

Almost correct.  Check out ./configure --help:

   --with-btl-mvapi=MVAPI_DIR
                           Additional directory to search for MVAPI
                           installation
   --with-btl-mvapi-libdir=IBLIBDIR
                           directory where the IB library can be found, if it
                           is not in MVAPI_DIR/lib or MVAPI_DIR/lib64

The --with-btl-mvapi-libdir flag is only necessary if the MVAPI library
cannot be found in /usr/local/topspin/lib or /usr/local/topspin/lib64.
There is no --with-mvapi flag.

So it's quite possible that with the wrong value for
--with-btl-mvapi-libdir, it failed to compile the mvapi component
(i.e., I suspect it was looking for /usr/local/topspin/libmosal.* when
libmosal is most likely in /usr/local/topspin/lib or
/usr/local/topspin/lib64), which resulted in Open MPI falling back to
TCP/GigE.
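
For example, assuming libmosal and friends really are under
/usr/local/topspin/lib or /usr/local/topspin/lib64, something like this
should be sufficient (no --with-btl-mvapi-libdir needed):

        # MVAPI_DIR only; lib/lib64 are searched automatically
        ./configure --prefix=/openmpi --with-btl-mvapi=/usr/local/topspin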

After you install Open MPI, you can run the ompi_info command and it
will show a list of all the installed components.  You should see the
mvapi component in both the btl and mpool frameworks if all went well. 
If it didn't, then send us the output (stdout and stderr) of configure,
the top-level config.log file, and the output from "make all" (please
compress!) and we can have a look to see what went wrong.
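
For example, a quick (purely illustrative) check would be something
like:

        # show only the mvapi-related component lines
        ompi_info | grep mvapi

which should list mvapi entries in both the btl and mpool frameworks.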

Once you have the mvapi components built, you can choose to use them at
run-time via switches to mpirun.  See the slides that we talked through
on the teleconference -- I provided some examples (you can set these
via command line arguments, environment variables, or files).

For one thing, you need to manually specify to use the 3rd generation
p2p stuff in Open MPI -- our 2nd generation is still currently the
default (that will likely change in the near future, but it hasn't been
done yet).  For example:

        mpirun --mca pml ob1 --mca btl mvapi,self -np 4 a.out

This tells the pml to use the "ob1" component (i.e., the 3rd generation
p2p stuff) and to use the mvapi and self btl components (self is
loopback -- one process sending to itself).
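
As an illustrative sketch of the environment-variable form (Open MPI
reads MCA parameters from OMPI_MCA_<param> environment variables), the
same selection could also be expressed as:

        # select the ob1 pml and the mvapi+self btls via the environment
        export OMPI_MCA_pml=ob1
        export OMPI_MCA_btl=mvapi,self
        mpirun -np 4 a.out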

Give that a whirl and let us know how it goes.

--
{+} Jeff Squyres
{+} The Open MPI Project
{+} http://www.open-mpi.org/


_______________________________________________
devel mailing list
devel@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/devel