
MTT Devel Mailing List Archives


Subject: Re: [MTT users] [OMPI devel] Using MTT to test the newly added SCTP BTL
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2007-12-06 06:48:06


On Dec 5, 2007, at 1:42 PM, Karol Mroz wrote:

>> Removal of .ompi_ignore should not create build problems for anyone
>> who
>> is running without some form of SCTP support. To test this claim, we
>> built Open MPI with .ompi_ignore removed and no SCTP support on
>> both an
>> ubuntu linux and an OSX machine. Both builds succeeded without any
>> problem.
>
> In light of the above, are there any objections to us removing the
> .ompi_ignore file from the SCTP BTL code?

Thanks for your persistence on this. :-)

I think that since no one has objected, you should feel free to do so.

> I tried to work around this problem by using a pre-installed version
> of
> Open MPI to run MTT tests on (ibm tests initially) but all I get is a
> short summary from MTT that things succeeded, instead of a detailed
> list
> of specific test successes/failures as is shown when using a nightly
> tarball.

MTT has several different reporters; the default "file" reporter
simply outputs a summary to stdout upon completion. The intention is
that the file reporter be used by developers for quick, interactive
tests to verify that they haven't broken anything; more details are
available in the metadata files in the scratch tree if you know where
to look.

We intended that MTT's database reporter would usually be used for
common testing. The web interface is [by far] the easiest way to
drill down into the results and see the details of individual
failures.

> The 'tests' also complete much faster which sparks some concern
> as to whether they were actually run.

If you just manually add the sctp btl directory to an existing
tarball, I'm pretty sure that it won't build. OMPI's build system is
highly dependent upon its "autogen" procedure, which creates a hard-
coded list of components to build. For a tarball, that procedure has
already completed, and even if you add in more component directories
after you expand the tarball, the hard-coded lists won't be updated,
and therefore OMPI's configure/build system will skip them.
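
For what it's worth, the effect can be sketched with a toy Python
model (everything here is made up for illustration -- it is NOT
OMPI's actual build code):

```python
import os
import tempfile

# Toy model: "autogen" snapshots the set of component directories, and
# "configure" later consults that frozen list, so directories added
# afterwards are silently skipped.
root = tempfile.mkdtemp()
btl_dir = os.path.join(root, "mca", "btl")
os.makedirs(os.path.join(btl_dir, "tcp"))

# "autogen": record the components that exist right now
frozen = set(os.listdir(btl_dir))

# a component directory added after the "tarball" was generated
os.makedirs(os.path.join(btl_dir, "sctp"))

# "configure": only components on the frozen list get built
to_build = sorted(d for d in os.listdir(btl_dir) if d in frozen)
print(to_build)  # sctp is skipped
```

The point being: the late-added directory exists on disk, but nothing
in the frozen list ever refers to it.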

> Furthermore, MTT puts the source
> into a new 'random' directory prior to building (way around this?),

No. The internal directory structure of the scratch tree, as you
noted, uses random directory names. This is for two reasons:

1. MTT can't know ahead of time what you are going to tell it to do.
2. One obvious way to get non-random directory names would be to use
the names of the INI file sections as the directory levels. However,
this creates very, very long directory names in the scratch tree, and
some compilers have a problem with that (even though the total path
lengths are within the filesystem limit). Hence, we came up with the
scheme of using short, random directory names to guarantee that the
total path length stays short.

Note that for human convenience, MTT *also* puts in sym links to the
short random directory names that correspond to the INI section
names. So if a human needs to go into the scratch tree to investigate
some failures, it should be pretty easy to navigate using the sym
links (vs. the short/random names).
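
For illustration, the scheme could be sketched roughly like this in
Python (the names and layout here are hypothetical, not MTT's actual
code):

```python
import os
import secrets
import tempfile

# Sketch: a short random directory name keeps the total path length
# small, while a symlink named after the INI section (hypothetical
# name below) gives humans something readable to navigate with.
scratch = tempfile.mkdtemp()
section = "MPI install: my rather long INI section name"

short = secrets.token_hex(4)  # 8 hex characters
os.makedirs(os.path.join(scratch, short))

# human-readable symlink pointing at the short random name
link = os.path.join(scratch, section.replace(":", "").replace(" ", "_"))
os.symlink(short, link)

print(len(short), os.readlink(link))
```

Either path reaches the same directory; the short one is what gets
embedded in build paths, the symlink is purely for humans.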

> so I
> can't add the SCTP directory by hand, and then run the
> build/installation phase. Adding the code on the fly during the
> installation phase also does not work.
>
> Any advice in this matter?
>
> Thanks again everyone.
>
> _______________________________________________
> devel mailing list
> devel_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/devel
>
> --
> Karol Mroz
> kmroz_at_[hidden]
>

-- 
Jeff Squyres
Cisco Systems