mtt-users_at_[hidden] is where you want to send MTT-related
questions.
On Tue, Jan/23/2007 09:12:59AM, Andreas Knüpfer wrote:
> Hello everyone,
> After our first experiments with MTT I'd like to ask a
> couple of questions. I guess if I put them here, the
> answers will go to the list archive as well. Maybe I can
> write a wiki page for the first steps towards MTT, too.
> This leads me to the first question: what's the
> login/passwd for the wiki? And how about the central
> database for results? Probably those answers should not
> go to the list.
There's anonymous read-only access to the wiki.
You need an HTTP account for submitting results to the
database. Usernames are organization names (e.g., cisco,
sun, lanl, etc.). What do you want as your username?
"tu-dresden" or "zih"?
> So far I'm using text output only. How can I set up my
> own database? There are some PHP files in the MTT
> tarball. Do they initialize the database, tables, and so
> on? Is there any script for this? Does it require MySQL
> or Postgres, or either of the two?
Postgres, but the server side is already taken care of.
There's a centralized database set up at Indiana
University, and the MTT test results can be viewed here:
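To point your MTT client at that database, you add a Reporter
section to your INI file. A minimal sketch, assuming you already
have an HTTP account -- the URL and the exact mttdatabase_* field
names below are illustrative, so check the sample INI files for
the real ones:

```ini
# Sketch only -- every value here is a placeholder.
[Reporter: IU database]
module = MTTDatabase
mttdatabase_url = https://your-mtt-server/submit/
mttdatabase_username = tu-dresden
mttdatabase_password = secret
mttdatabase_realm = OMPI
```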
> When I wrote my own test scripts for a simple test
> application, I followed the example INI files provided,
> some pieces of documentation, and the Perl code. Is there
> "real" documentation about all the bells and whistles? If
> not, could we write one, or at least provide a
> step-by-step guide for beginners?
The wiki is a good starting point. There are links to a
bunch of stuff on the MTT home wiki page here:
The closest thing to a Beginner's Guide to using the MTT
client would be:
> When I want to check whether a test run was successful
> or not, which way would you suggest? Either doing it in
> bash commands placed in the INI file, or rather writing
> new Perl modules that will be specific to my
> application? The trivial test case does a bit of both,
> doesn't it?
B. new Perl modules :)
But the test suites that are listed in the
samples/ompi-core-templates.ini file already have modules in
place to Get (e.g., svn export), Build (e.g., make), Run
(mpirun), and Analyze (prepare test results for the
database).
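For a concrete picture, the phase sections for a hypothetical
suite "foo" might look roughly like this -- the module names are
meant to follow the style of samples/ompi-core-templates.ini and
may differ in your MTT version:

```ini
# Illustrative sketch -- "foo" is a made-up suite name.
[Test get: foo]
module = SVN            # fetch, e.g., "svn export" from ompi-tests

[Test build: foo]
module = Shell          # build, e.g., run "make"

[Test run: foo]
module = Simple         # run the tests under mpirun
```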
If you add a *new* test suite to ompi-tests, the existing
modules should be able to do all of the above (with the
exception of Analyze, if the test outputs performance data).
You just might need to add some special conditions in the [Test
run: foo] section of the INI file to indicate, e.g., whether
some tests are *supposed* to fail, some tests take longer to
run, etc.
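As a hedged sketch of such special conditions -- the field and
funclet names below follow the style of the sample INI files and
may not match your MTT version exactly:

```ini
# Sketch only -- verify field/funclet names against the samples.
[Test run: foo]
module = Simple
timeout = 10:00                      # give slow tests a longer limit
# Hypothetical: a test that is *supposed* to fail, i.e., "pass"
# means a nonzero exit status
pass = &ne(&test_wexitstatus(), 0)
```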
> Furthermore, there is a caching scheme for test results
> between successive runs: MTT avoids tests that were run
> before. Where does it store this information? How can it
> be explicitly cleared?
The cached data is stored as serialized Perl variables
down in the scratch directory.
The --force option "forces" MTT to run everything,
regardless of whether it's been run before.
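So, assuming a typical invocation (the paths here are
placeholders, and the option spellings are illustrative), re-running
everything or clearing the cache looks roughly like:

```shell
# Re-run everything, ignoring cached results:
client/mtt --file my-config.ini --scratch /path/to/scratch --force

# Or remove the cached (serialized Perl) state entirely:
rm -rf /path/to/scratch
```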
> As one of our current goals, I'm integrating VampirTrace
> into the MTT test cycle. The most straightforward way
> would be to have a test application tarball which includes
> VampirTrace sources, wouldn't it? Would it be feasible to
> have two download clauses in a single test-get phase and
> two successive build clauses in a single test-build
> phase? Could this work with the current macros?
I don't know of a user that's done two fetches/compiles in
a single section yet. Why do you need to download/build
twice for a single test suite? (There's nothing preventing
this; just wondering.) I think what you want would look
something like this:
[Test get: VampirTrace]
url1 = http://www.foo.com
url2 = http://www.bar.com
module = Tarball

[Test build: VampirTrace]
module = Shell
shell_build_command = <<EOT
# (your build commands here, e.g., ./configure && make && make install)
EOT
Note: all the test suites currently set up to be used with
MTT are fetched with "svn export" out of the ompi-tests SVN
repository (except the trivial tests, which have their test
sources embedded in MTT). We'd need to create a
Test/Get/Tarball.pm module if you don't want VampirTrace
added to ompi-tests.
> Thank you very much for your answers!
> Dipl. Math. Andreas Knuepfer,
> Center for Information Services and
> High Performance Computing (ZIH), TU Dresden,
> Willersbau A114, Zellescher Weg 12, 01062 Dresden
> phone +49-351-463-38323, fax +49-351-463-37773