MTT Devel Mailing List Archives

From: Ethan Mallove (ethan.mallove_at_[hidden])
Date: 2006-10-11 14:34:33


On Tue, Oct/10/2006 10:35:22PM, Jeff Squyres wrote:
> I'm not quite sure I understand the problem. In each phase's section, you
> are supposed to identify the one (or more) predecessor phase sections. For
> example, in MPI install phases, you specify an "mpi_get" field that
> indicates which MPI get phases should be built with this install section:
>
> [MPI Get: foo]
> ...
>
> [MPI Get: bar]
> ...
>
> [MPI Install: whatever]
> mpi_get = foo,bar
>
> The "whatever" MPI install section will build both the "foo" and "bar" MPI
> get sections. This is also true with test get, build, and run phases.
>
> [Test get: foo]
> ...
>
> [Test build: bar]
> test_get = foo
>
> [Test run: baz]
> test_build = bar
>
> These "back pointer" fields refer backwards to their parent/predecessor phases.
> They can also be comma-delimited lists of phase names (just like the
> "mpi_get" field in the MPI install phase) to help prevent duplication in the
> ini file.
>
> So MTT does not assume or require that test get, build, and run phases all
> have the same base phase name (e.g., [test get: intel], [test build: intel],
> [test run: intel]). You just have to link up names correctly with the
> corresponding "back pointer" field names.
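> As a concrete sketch of that scheme (the section names here are just
> illustrations, and it assumes the comma-delimited "test_build" form
> accepts multi-word section names):
>
> ```ini
> [Test get: intel]
> ...
>
> [Test build: intel sparc 32]
> test_get = intel
>
> [Test build: intel sparc 64]
> test_get = intel
>
> [Test run: intel]
> test_build = intel sparc 32,intel sparc 64
> ```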
>
> Having said all that, does this make your problem easier? I'm not entirely
> sure I understand the problem, so I'm not entirely sure that this is the
> answer. :-)
>
>
> On 10/9/06 5:39 PM, "Ethan Mallove" <ethan.mallove_at_[hidden]> wrote:
>
> > To answer my own question, apparently Test Get/Build/Run
> > section labels must indeed match up
> > (http://svn.open-mpi.org/trac/mtt/wiki/MTTOverview). To
> > work within these confines, I am instead breaking up my ini
> > file into several ini files (see below), and have created a
> > wrapper script to cat in only the specific platform/bitness
> > ini files I want to test.
> >
> > trunk.ini
> > v1.0.ini
> > v1.1.ini
> > v1.2.ini
> > ompi-core-template.ini
> > build-intel-i386-32.ini
> > build-intel-i386-64.ini
> > build-intel-sparc-32.ini
> > build-intel-sparc-64.ini
> > mpi-install-i386-32.ini
> > mpi-install-i386-64.ini
> > mpi-install-sparc-32.ini
> > mpi-install-sparc-64.ini
> > reporter.ini
> >
> > E.g.,
> >
> > cat $mttdir/build-intel-$arch-$bit.ini \
> >     $mttdir/mpi-install-$arch-$bit.ini \
> >     $mttdir/ompi-core-template.ini \
> >     $mttdir/reporter.ini \
> >     $mttdir/$branch.ini | \
> > client/mtt [...] \
> >     --scratch ./$scratch \
> >     "mttdatabase_platform=Sun $bit-bit" \
> >     "mpi_get=ompi-nightly-$branch"
> >
> > I think things were more manageable all in one file. I
> > don't suppose there's an easy way to allow this using an ini
> > parameter (e.g., suite_name), versus the section name after
> > the ':'?
> >
> > -Ethan
> >
> >
> >
> > On Mon, Oct/09/2006 10:58:55AM, Ethan Mallove wrote:
> >> My intel tests compile okay, but then do not run.
> >> Here's the salient --debug output:
> >>
> >> ...
> >>>> Test build [test build: intel sparc 32]
> >> Evaluating: intel
> >> Building for [ompi-nightly-v1.2] / [1.2a1r12050] /
> >> [solaris sparc 32] / [intel sparc 32]
> >> Evaluating: Intel_OMPI_Tests
> >> Making dir: tests (cwd:
> >> /workspace/em162155/hpc/mtt/cron/ompi-core-testers/sparc/32/installs/ompi-nightly-v1.2/solaris_sparc_32/1.2a1r12050)
> >> tests does not exist -- creating
> >> Making dir: intel_sparc_32 (cwd:
> >> /workspace/em162155/hpc/mtt/cron/ompi-core-testers/sparc/32/installs/ompi-nightly-v1.2/solaris_sparc_32/1.2a1r12050/tests)
> >>
> >> ...
> >> OUT:[[[ END OF COMPILE ]]]
> >> OUT:Compile complete. Log in all_tests_no_perf.12950.out
> >> OUT:Start: Mon Oct 9 02:48:19 EDT 2006
> >> OUT:End: Mon Oct 9 03:05:28 EDT 2006
> >> Command complete, exit status: 0
> >> Writing built file:
> >> /workspace/em162155/hpc/mtt/cron/ompi-core-testers/sparc/32/installs/ompi-nightly-v1.2/solaris_sparc_32/1.2a1r12050/tests/intel_sparc_32/intel_tests/test_built.ini
> >> ...
> >> Completed test build successfully
> >> ...
> >>>> Test run [intel]
> >> Evaluating: intel (how come no tests get run?)
> >>>> Test run [ibm]
> >>
> >> Is this because my "Test get" sections do not match my "Test
> >> build" and "Test run" sections?
> >>
> >> [Test get: intel]
> >> [Test build: intel sparc 32]
> >> [Test build: intel sparc 64]
> >> [Test build: intel i386 32]
> >> [Test build: intel i386 64]
> >> [Test run: intel]
> >>

So if I put all four platform/bitness combinations in a
single ini file, I then have to do some ugly ini parameter
overriding to line up the sections, e.g.:

Command 1)

$ cat /home/em162155/mtt/all.ini | \
     client/mtt -p -d - \
       [...] \
       mpi_get='ompi-nightly-trunk' \
       "intel:test_build='intel $arch $bit'" \
       "imb:test_build='imb $arch $bit'" \
       "ibm:test_build='ibm $arch $bit'" \
       "trivial:test_build='trivial $arch $bit'"
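The mechanical expansion of those per-suite overrides can be sketched
as a small script; the suite names and $arch/$bit values below are
taken from this thread, and the actual client/mtt invocation is only
indicated in a comment:

```shell
# Sketch only: build the per-suite test_build overrides from
# "Command 1" for one arch/bitness combination.
arch=sparc
bit=32

args=""
for suite in intel imb ibm trivial; do
    # Each override maps a section label (e.g. "intel") to its
    # platform-specific build section (e.g. "intel sparc 32").
    args="$args ${suite}:test_build='${suite} $arch $bit'"
done

echo "$args"

# The real invocation would then be roughly:
#   cat /home/em162155/mtt/all.ini | client/mtt -p -d - \
#       mpi_get='ompi-nightly-trunk' $args
```

This only assembles the argument string; it doesn't change what the
MTT client itself accepts.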

I was thinking it would be nice if I could do something
like this in my ini:

[Test get: intel all]
suite_name = intel

[Test build: intel sparc 32]
suite_name = intel

[Test build: intel sparc 64]
suite_name = intel

[Test build: intel i386 32]
suite_name = intel

[Test build: intel i386 64]
suite_name = intel

[Test run: intel all]
suite_name = intel

Then the get/build/run phases are linked by a generic
suite_name, and the following simpler command has the same
effect as "Command 1":

Command 2)

$ cat /home/em162155/mtt/all.ini | \
     client/mtt -p -d - \
       [...] \
       mpi_get='ompi-nightly-trunk' \
       --section "$arch;$bit" \
       --section all

-Ethan

> >> If so, it might be nice to get a "no match found" warning
> >> of some kind.
> >>
> >> -Ethan
> >> _______________________________________________
> >> mtt-users mailing list
> >> mtt-users_at_[hidden]
> >> http://www.open-mpi.org/mailman/listinfo.cgi/mtt-users
> > _______________________________________________
> > mtt-users mailing list
> > mtt-users_at_[hidden]
> > http://www.open-mpi.org/mailman/listinfo.cgi/mtt-users
>
>
> --
> Jeff Squyres
> Server Virtualization Business Unit
> Cisco Systems