MTT Devel Mailing List Archives

Subject: [MTT users] unable to pull tests
From: Karol Mroz (kmroz_at_[hidden])
Date: 2007-11-14 17:04:27

Hello everyone...

I've been trying to get MTT set up to run tests on the SCTP BTL Brad
Penoff and I (over at UBC) have developed. While I'm able to obtain a
nightly tarball, build and install the Open MPI middleware, I'm unable
to pull any tests from the various svn repositories. Currently I've tried
pulling IBM and Onesided tests as shown in the sample
ompi-core-template.ini file.

Here is the output I see from the console when running with --verbose:
----------------------------------------------------------------------
*** MTT: ./mtt -f ../samples/ompi-core-template-kmroz.ini --verbose
*** Reporter initializing
*** Reporter initialized
*** MPI get phase starting
>> MPI get: [mpi get: ompi-nightly-trunk]
   Checking for new MPI sources...
   No new MPI sources
*** MPI get phase complete
*** MPI install phase starting
>> MPI install [mpi install: gcc warnings]
   Installing MPI: [ompi-nightly-trunk] / [1.3a1r16706] / [gcc warnings]...
   Completed MPI install successfully
   Installing MPI: [ompi-nightly-trunk] / [1.3a1r16682] / [gcc warnings]...
   Completed MPI install successfully
   Installing MPI: [ompi-nightly-trunk] / [1.3a1r16723] / [gcc warnings]...
   Completed MPI install successfully
*** MPI install phase complete
*** Test get phase starting
>> Test get: [test get: onesided]
   Checking for new test sources...
----------------------------------------------------------------------

As you can see, MTT seems to hang on 'Checking for new test sources.'

I will attach a copy of the .ini file in hopes that someone may be able
to point me in the right direction.

Thanks in advance.

-----------------------------------------------------------------------
# Copyright (c) 2006-2007 Cisco Systems, Inc. All rights reserved.
# Copyright (c) 2006-2007 Sun Microsystems, Inc. All rights reserved.

# Template MTT configuration file for Open MPI core testers. The
# intent for this template file is to establish at least some loose
# guidelines for what Open MPI core testers should be running on a
# nightly basis. This file is not intended to be an exhaustive sample
# of all possible fields and values that MTT offers. Each site will
# undoubtedly have to edit this template for their local needs (e.g.,
# pick compilers to use, etc.), but this file provides a baseline set
# of configurations that we intend you to run.

# OMPI core members will need to edit some values in this file based
# on your local testing environment. Look for comments with "OMPI
# Core:" for instructions on what to change.

# Note that this file is artificially longer than it really needs to
# be -- a bunch of values are explicitly set here that are exactly
# equivalent to their defaults. This is mainly because there is no
# reliable form of documentation for this ini file yet, so the values
# here comprise a good set of what options are settable (although it
# is not a comprehensive set).

# Also keep in mind that at the time of this writing, MTT is still
# under active development and therefore the baselines established in
# this file may change on a relatively frequent basis.

# The guidelines are as follows:
# 1. Download and test nightly snapshot tarballs of at least one of
# the following:
# - the trunk (highest preference)
# - release branches (highest preference is the most recent release
# branch; lowest preference is the oldest release branch)
# 2. Run all 4 correctness test suites from the ompi-tests SVN
# - trivial, as many processes as possible
# - intel tests with all_tests_no_perf, up to 64 processes
# - IBM, as many processes as possible
# - IMB, as many processes as possible
# 3. Run with as many different components as possible
# - PMLs (ob1, dr)
# - BTLs (iterate through sm, tcp, whatever high speed network(s) you
# have, etc. -- as relevant)

# Overall configuration

[MTT]
# OMPI Core: if you are not running in a scheduled environment and you
# have a fixed hostfile for what nodes you'll be running on, fill in
# the absolute pathname to it here. If you do not have a hostfile,
# leave it empty. Example:
# hostfile = /home/me/mtt-runs/mtt-hostfile
# This file will be parsed and will automatically set a valid value
# for &env_max_np() (it'll count the number of lines in the hostfile,
# adding slots/cpu counts if it finds them). The "hostfile" value is
# ignored if you are running in a recognized scheduled environment.
hostfile =
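# (The slot-counting behavior described above can be sketched as follows.
# This is a hypothetical Python illustration, not MTT's actual Perl
# parser; the "slots=N" suffix is one common hostfile convention.)
#
# ```python
# def count_slots(hostfile_text):
#     """Count one slot per host line, honoring an optional 'slots=N'."""
#     total = 0
#     for line in hostfile_text.splitlines():
#         line = line.split("#")[0].strip()  # drop comments and blanks
#         if not line:
#             continue
#         slots = 1
#         for token in line.split()[1:]:
#             if token.startswith("slots="):
#                 slots = int(token.split("=", 1)[1])
#         total += slots
#     return total
# ```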

# OMPI Core: if you would rather list the hosts individually on the
# mpirun command line, list hosts here delimited by whitespace (if you
# have a hostfile listed above, this value will be ignored!). Hosts
# can optionally be suffixed with ":num", where "num" is an integer
# indicating how many processes may be started on that machine (if not
# specified, ":1" is assumed). The sum of all of these values is used
# for &env_max_np() at run time. Example (4 uniprocessors):
# hostlist = node1 node2 node3 node4
# Another example (4 2-way SMPs):
# hostlist = node1:2 node2:2 node3:2 node4:2
# The "hostlist" value is ignored if you are running in a scheduled
# environment or if you have specified a hostfile.
hostlist = localhost
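# (The ":num" summing rule described above can be sketched like this --
# a hypothetical Python helper for illustration only, not MTT code:
# each host contributes "num" processes, defaulting to 1, and the sum
# becomes &env_max_np().)
#
# ```python
# def hostlist_max_np(hostlist):
#     """Sum the ':num' suffixes of a whitespace-delimited hostlist."""
#     total = 0
#     for entry in hostlist.split():
#         host, _, num = entry.partition(":")
#         total += int(num) if num else 1  # ":1" assumed when absent
#     return total
# ```
#
# For example, the 4 2-way SMP case above yields 8.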

# OMPI Core: if you are running in a scheduled environment and want to
# override the scheduler and set the maximum number of processes
# returned by &env_max_procs(), you can fill in an integer here.
max_np = 2

# OMPI Core: Output display preference; the default width at which MTT
# output will wrap.
textwrap = 76

# OMPI Core: After the timeout for a command has passed, wait this
# many additional seconds to drain all output, and then kill it with
# extreme prejudice.
drain_timeout = 5

# OMPI Core: Whether this invocation of the client is a test of the
# client setup itself. Specifically, this value should be set to true
# (1) if you are testing your MTT client and/or INI file and do not
# want the results included in normal reporting in the MTT central
# results database. Results submitted in "trial" mode are not
# viewable (by default) on the central database, and are automatically
# deleted from the database after a short time period (e.g., a week).
# Setting this value to 1 is exactly equivalent to passing "--trial"
# on the MTT client command line. However, any value specified here
# in this INI file will override the "--trial" setting on the command
# line (i.e., if you set "trial = 0" here in the INI file, that will
# override and cancel the effect of "--trial" on the command line).
# trial = 0

# OMPI Core: Set the scratch parameter here (if you do not want it to
# be automatically set to your current working directory). Setting
# this parameter accomplishes the same thing that the --scratch option
# does.
# scratch = &getenv("HOME")/mtt-scratch

# OMPI Core: Set local_username here if you would prefer to not have
# your local user ID in the MTT database
local_username = kmroz

# OMPI Core: --force can be set here, instead of at the command line.
# Useful for a developer workspace in which it makes no sense to not
# use --force
# force = 1

# OMPI Core: Specify a list of sentinel files that MTT will regularly
# check for. If these files exist, MTT will exit more-or-less
# immediately (i.e., after the current test completes) and report all
# of its results. This is a graceful mechanism to make MTT stop right
# where it is but not lose any results.
# terminate_files = &getenv("HOME")/mtt-stop,&scratch_root()/mtt-stop
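# (The sentinel mechanism described above amounts to a periodic
# existence check between tests; a minimal Python sketch, purely
# illustrative and not MTT's implementation:)
#
# ```python
# import os
#
# def should_terminate(sentinel_paths):
#     """Return True if any sentinel file exists, signaling a graceful stop."""
#     return any(os.path.exists(p) for p in sentinel_paths)
# ```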

# OMPI Core: Specify a default description string that is used in the
# absence of description strings in the MPI install, Test build, and
# Test run sections. The intent of this field is to record variable
# data that is outside the scope, but has effect on the software under
# test (e.g., firmware version of a NIC). If no description string is
# specified here and no description strings are specified below, the
# description data field is left empty when reported.
# description = NIC firmware: &system("get_nic_firmware_rev")

# OMPI Core: Specify a logfile where you want all MTT output to be
# sent in addition to stdout / stderr.
# logfile = /tmp/my-logfile.txt

# OMPI Core: If you have additional .pm files for your own funclets,
# you can have a comma-delimited list of them here. Note that each
# .pm file *must* be a package within the MTT::Values::Functions
# namespace. For example, such a file must include a line like:
# package MTT::Values::Functions::Cisco;
# If this file contains a perl function named foo, you can invoke this
# funclet as &Cisco::foo(). Note that funclet files are loaded
# almost immediately, so you can use them even for other field values
# in the MTT section.
# funclet_files = /path/to/, /path/to/


[Lock]
# The only module available is the MTTLockServer, and requires running
# the mtt-lock-server executable somewhere. You can leave this
# section blank and there will be no locking.
#module = MTTLockServer
#mttlockserver_host = hostname where mtt-lock-server is running
#mttlockserver_port = integer port number of the mtt-lock-server

# MPI get phase

[MPI get: ompi-nightly-trunk]
mpi_details = Open MPI

module = OMPI_Snapshot
ompi_snapshot_url =

[MPI install: gcc warnings]
mpi_get = ompi-nightly-trunk
save_stdout_on_success = 1
merge_stdout_stderr = 0
bitness = 32

module = OMPI
ompi_vpath_mode = none
# OMPI Core: This is a GNU make option; if you are not using GNU make,
# you'll probably want to delete this field (i.e., leave it to its
# default [empty] value).
ompi_make_all_arguments = -j 4
ompi_make_check = 1
# OMPI Core: You will likely need to update these values for whatever
# compiler you want to use. You can pass any configure flags that you
# want, including those that change which compiler to use (e.g., CC=cc
# CXX=CC F77=f77 FC=f90). Valid compiler names are: gnu, pgi, intel,
# ibm, kai, absoft, pathscale, sun. If you have other compiler names
# that you need, let us know. Note that the compiler_name flag is
# solely for classifying test results; it does not automatically pass
# values to configure to set the compiler.
ompi_compiler_name = gnu
ompi_compiler_version = &get_gcc_version()
ompi_configure_arguments = CFLAGS=-pipe --enable-picky --enable-debug


# MPI run details

[MPI Details: Open MPI]

# MPI tests
exec = mpirun @hosts@ -np &test_np() @mca@ --prefix &test_prefix() \
       &test_executable() &test_argv()

# ORTE tests
exec:rte = &test_executable() --host &env_hosts() --prefix \
           &test_prefix() &test_argv()

hosts = &if(&have_hostfile(), "--hostfile " . &hostfile(), \
            &if(&have_hostlist(), "--host " . &hostlist(), ""))

# Example showing conditional substitution based on the MPI get
# section name (e.g., different versions of OMPI have different
# capabilities / bugs).
mca = &enumerate( \
        "--mca btl sm,tcp,self@v1_1_mca_params@", \
        "--mca btl tcp,self@v1_1_mca_params@")

# Boolean indicating IB connectivity
is_up = &check_ipoib_connectivity()

# Figure out which mca's to use
mca = <<EOT

     # Return cached mca, if we have it
     if (defined(@mca)) {
         return \@mca;
     }

     my @hosts = split /\s+|,/, hostlist_hosts();

     if (scalar(@hosts) < 2) {
         push(@mca, "--mca btl self,sm");
     } else {
         if ($ib_up) {
             push(@mca, "--mca btl self,udapl");
         } else {
             push(@mca, "--mca btl self,tcp");
         }
     }
     return \@mca;
EOT

# OMPI v1.1 cannot handle heterogeneous numbers of TCP or OpenIB
# interfaces within a single job. So restrict it to a finite number
# that will be the same across all processes in the job (adjust for
# your own site, of course -- this particular example is meaningless
# if all nodes at your site have a homogeneous type and number of
# network interfaces).
v1_1_mca_params = &if(&eq(&mpi_get_name(), "ompi-nightly-v1.1"), \
        " --mca btl_tcp_if_include eth0 --mca oob_tcp_if_include eth0 --mca btl_openib_max_btls 1", "")

# Given that part of what we are testing is ORTE itself, using orterun
# to launch something to cleanup can be problematic. We *HIGHLY*
# recommend that you replace the after_each_exec section default value
# below with something that your run-time system can perform
# natively. For example, putting "srun -N $SLURM_NNODES killall -9
# mpirun orted &test_executable()" works nicely on SLURM / Linux
# systems -- assuming that your MTT run has all nodes exclusively to
# itself (i.e., that the "killall" won't kill some legitimate jobs).

# A helper script is installed by the "OMPI" MPI Install
# module. This script is orterun-able and will kill
# all rogue orteds on a node and whack any session directories.
# Invoke via orterun just to emphasize that it is not an MPI
# application. The helper script is installed in OMPI's bin dir, so
# it'll automatically be found in the path (because OMPI's bin dir is
# in the path).

after_each_exec = <<EOT
# We can exit if the test passed or was skipped (i.e., there's no need
# to cleanup).
if test "$MTT_TEST_RUN_RESULT" = "passed" -o "$MTT_TEST_RUN_RESULT" = "skipped"; then
    exit 0
fi

if test "$MTT_TEST_HOSTFILE" != ""; then
    args="--hostfile $MTT_TEST_HOSTFILE"
elif test "$MTT_TEST_HOSTLIST" != ""; then
    args="--host $MTT_TEST_HOSTLIST"
fi
orterun $args -np $MTT_TEST_NP --prefix $MTT_TEST_PREFIX
EOT

# Test get phase

#[Test get: ibm]
#module = SVN
#svn_url =
#svn_post_export = <<EOT

[Test get: onesided]
module = SVN
svn_url =
svn_post_export = <<EOT
EOT


# Test build phase

[Test build: ibm]
test_get = ibm
save_stdout_on_success = 1
merge_stdout_stderr = 1
stderr_save_lines = 100

module = Shell
shell_build_command = <<EOT
./configure CC=mpicc CXX=mpic++ F77=mpif77
make
EOT

#[Test build: onesided]
#test_get = onesided
#save_stdout_on_success = 1
#merge_stdout_stderr = 1
#stderr_save_lines = 100
# Have the onesided tests skip the OMPI 1.1 testing; MPI-2 one-sided
# just plain doesn't work there and won't be fixed.
#skip_mpi_get = ompi-nightly-v1.1
# Can also have a skip_mpi_install for the same purpose (skip specific
# installs)

#module = Shell
#shell_build_command = <<EOT


# Test Run phase

[Test run: ibm]
test_build = ibm
pass = &and(&cmd_wifexited(), &eq(&cmd_wexitstatus(), 0))
skipped = &and(&cmd_wifexited(), &eq(&cmd_wexitstatus(), 77))
timeout = &max(30, &multiply(10, &test_np()))
save_stdout_on_pass = 1
merge_stdout_stderr = 1
stdout_save_lines = 100
np = &env_max_procs()
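# (The timeout funclet expression above -- &max(30, &multiply(10,
# &test_np())) -- scales the per-test timeout with the process count,
# with a 30-second floor. A hypothetical Python rendering for clarity:)
#
# ```python
# def run_timeout(np):
#     """At least 30 seconds, growing by 10 seconds per process."""
#     return max(30, 10 * np)
# ```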

specify_module = Simple
# Similar rationale to the intel test run section
simple_first:tests = &find_executables("collective", "communicator", \
                                       "datatype", "dynamic", "environment", \
                                       "group", "info", "io", "onesided", \
                                       "pt2pt", "topology")

# Similar rationale to the intel test run section
simple_fail:tests = environment/abort environment/final
simple_fail:pass = &and(&cmd_wifexited(), &ne(&cmd_wexitstatus(), 0))
simple_fail:exclusive = 1
simple_fail:np = &env_max_procs()

#[Test run: onesided]
#test_build = onesided
#pass = &and(&cmd_wifexited(), &eq(&cmd_wexitstatus(), 0))
#timeout = &max(30, &multiply(10, &test_np()))
#save_stdout_on_pass = 1
#merge_stdout_stderr = 1
#stdout_save_lines = 100
#np = &if(&gt(&env_max_procs(), 0), &step(2, &max(2, &env_max_procs()), 2), 2)
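# (One reading of the commented-out np expression above: &step(2, N, 2)
# enumerates even process counts 2, 4, ..., up to &env_max_procs(),
# falling back to 2 when no maximum is known. A hypothetical Python
# equivalent, not MTT code:)
#
# ```python
# def np_values(env_max_procs):
#     """Even process counts from 2 up to the environment maximum."""
#     if env_max_procs > 0:
#         return list(range(2, max(2, env_max_procs) + 1, 2))
#     return [2]
# ```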

#specify_module = Simple
#simple_pass:tests = &cat("run_list")


# Reporter phase

#[Reporter: IU database]
#module = MTTDatabase

#mttdatabase_realm = OMPI
#mttdatabase_url =
# OMPI Core: Change this to be the username and password for your
# submit user. Get this from the OMPI MTT administrator.
#mttdatabase_username = you must set this value
#mttdatabase_password = you must set this value
# OMPI Core: Change this to be some short string identifying your
# cluster.
#mttdatabase_platform = you must set this value


# This is a backup for while debugging MTT; it also writes results to
# a local text file

#[Reporter: text file backup]
module = TextFile

textfile_filename = $phase-$section-$mpi_name-$mpi_version.txt

textfile_summary_header = <<EOT
hostname: &shell("hostname")
uname: &shell("uname -a")
who am i: &shell("who am i")

#textfile_summary_footer =
#textfile_detail_header =
#textfile_detail_footer =

textfile_textwrap = 78

--
Karol Mroz