MTT Devel Mailing List Archives

From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2007-09-25 10:49:52


You know -- I think the run_random_hostname_patterns.pl thingy was
from the ORTE tests that Sun started but never had a chance to finish
(under ompi-tests/orte).

Were you running something under there, perchance?

Regardless, I think we now know that we can discard these errors from
UH -- they're nothing to worry about.
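
For the record, the proximate cause looks like plain string length: the
launcher value in that INSERT is the full
"./wrappers/run_random_hostname_patterns.pl" path, which is far longer
than the 16 characters the column allows. A quick sanity check:

  perl -e 'print length("./wrappers/run_random_hostname_patterns.pl"), "\n";'

prints a number well over 16 -- which is exactly what the "value too
long for type character varying(16)" error is complaining about.
Whether the column should be widened or the client should truncate long
launcher strings is a separate question.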

On Sep 25, 2007, at 10:39 AM, Mohamad Chaarawi wrote:

> Attached now.
>
> Mohamad Chaarawi wrote:
> >> Sorry, somehow I missed it!
>>
> >> I don't know what's going on exactly here; I've never seen the
> >> run_random_hostname_patterns.pl value.
> >> But I remember that I had some issues when setting the hosts
> >> value, so I commented it out. I don't know if that's the
> >> problem, but I have also attached all the INI files that I'm using.
>>
>>
> >> hosts = &if(&have_hostfile(), "--hostfile " . &hostfile(), \
> >>        &if(&have_hostlist(), "--host " . &hostlist(), ""))
> >> # Yes, all these quotes are necessary. Don't mess with them!
> >> #hosts = &if(&have_hostfile(), "&join("--hostfile ", "&hostfile()")", \
> >> #       "&if(&have_hostlist(), "&join("--host ", "&hostlist()")", "")")
>>
>>
>> Jeff Squyres wrote:
>>> Mohamad --
>>>
>>> Did you have a chance to look at this?
>>>
>>>
>>> On Sep 19, 2007, at 7:30 AM, Jeff Squyres wrote:
>>>
>>>> That's weird; I don't know.
>>>>
> >>>> Mohamad: can you send a snippet of your INI file that sets up the
>>>> value
>>>> "run_random_hostname_patterns.pl"? I'm curious to see how it's
>>>> propagating down into the resource_manager value...
>>>>
>>>>
>>>> On Sep 18, 2007, at 8:43 PM, Josh Hursey wrote:
>>>>
>>>>> This is weird. How could the script "./wrappers/
>>>>> run_random_hostname_patterns.pl" be submitted as the 'launcher' to
>>>>> the database? I thought submit.php would only get valid launchers
>>>>> from the client?
>>>>>
>>>>> -- Josh
>>>>>
>>>>> On Sep 18, 2007, at 8:05 PM, jjhursey_at_[hidden] wrote:
>>>>>
>>>>>> SQL QUERY: INSERT INTO test_run_command
>>>>>> (test_run_command_id, launcher, resource_mgr, parameters,
>>>>>> network,
>>>>>> test_run_network_id) VALUES
>>>>>> ('132', './wrappers/run_random_hostname_patterns.pl',
>>>>>> 'slurm', '',
>>>>>> '', '2')
>>>>>> SQL ERROR: ERROR: value too long for type character varying(16)
>>>>>> SQL ERROR:
>>>>> _______________________________________________
>>>>> mtt-devel mailing list
>>>>> mtt-devel_at_[hidden]
>>>>> http://www.open-mpi.org/mailman/listinfo.cgi/mtt-devel
>>>>
>>>> --Jeff Squyres
>>>> Cisco Systems
>>>>
>>>>
>>>
>>> --Jeff Squyres
>>> Cisco Systems
>>>
>>
>>
>
>
> --
> Mohamad Chaarawi
> Instructional Assistant http://www.cs.uh.edu/~mschaara
> Department of Computer Science University of Houston
> 4800 Calhoun, PGH Room 526 Houston, TX 77204, USA
> #
> # Copyright (c) 2006-2007 Cisco Systems, Inc. All rights reserved.
> # Copyright (c) 2006-2007 Sun Microsystems, Inc. All rights reserved.
> #
>
> # Template MTT configuration file for Open MPI core testers. The
> # intent for this template file is to establish at least some loose
> # guidelines for what Open MPI core testers should be running on a
> # nightly basis. This file is not intended to be an exhaustive sample
> # of all possible fields and values that MTT offers. Each site will
> # undoubtedly have to edit this template for their local needs (e.g.,
> # pick compilers to use, etc.), but this file provides a baseline set
> # of configurations that we intend you to run.
>
> # OMPI core members will need to edit some values in this file based
> # on your local testing environment. Look for comments with "OMPI
> # Core:" for instructions on what to change.
>
> # Note that this file is artificially longer than it really needs to
> # be -- a bunch of values are explicitly set here that are exactly
> # equivalent to their defaults. This is mainly because there is no
> # reliable form of documentation for this ini file yet, so the values
> # here comprise a good set of what options are settable (although it
> # is not a comprehensive set).
>
> # Also keep in mind that at the time of this writing, MTT is still
> # under active development and therefore the baselines established in
> # this file may change on a relatively frequent basis.
>
> # The guidelines are as follows:
> #
> # 1. Download and test nightly snapshot tarballs of at least one of
> # the following:
> # - the trunk (highest preference)
> # - release branches (highest preference is the most recent release
> # branch; lowest preference is the oldest release branch)
> # 2. Run all 4 correctness test suites from the ompi-tests SVN
> # - trivial, as many processes as possible
> # - intel tests with all_tests_no_perf, up to 64 processes
> # - IBM, as many processes as possible
> # - IMB, as many processes as possible
> # 3. Run with as many different components as possible
> # - PMLs (ob1, dr)
> # - BTLs (iterate through sm, tcp, whatever high speed network(s)
> #   you have, etc. -- as relevant)
>
> #======================================================================
> # Overall configuration
> #======================================================================
>
> [MTT]
>
> # OMPI Core: if you are not running in a scheduled environment and you
> # have a fixed hostfile for what nodes you'll be running on, fill in
> # the absolute pathname to it here. If you do not have a hostfile,
> # leave it empty. Example:
> # hostfile = /home/me/mtt-runs/mtt-hostfile
> # This file will be parsed and will automatically set a valid value
> # for &env_max_np() (it'll count the number of lines in the hostfile,
> # adding slots/cpu counts if it finds them). The "hostfile" value is
> # ignored if you are running in a recognized scheduled environment.
> hostfile =
>
> # OMPI Core: if you would rather list the hosts individually on the
> # mpirun command line, list hosts here delimited by whitespace (if you
> # have a hostfile listed above, this value will be ignored!). Hosts
> # can optionally be suffixed with ":num", where "num" is an integer
> # indicating how many processes may be started on that machine (if not
> # specified, ":1" is assumed). The sum of all of these values is used
> # for &env_max_np() at run time. Example (4 uniprocessors):
> # hostlist = node1 node2 node3 node4
> # Another example (4 2-way SMPs):
> # hostlist = node1:2 node2:2 node3:2 node4:2
> # The "hostlist" value is ignored if you are running in a scheduled
> # environment or if you have specified a hostfile.
> hostlist =
>
> # OMPI Core: if you are running in a scheduled environment and want to
> # override the scheduler and set the maximum number of processes
> # returned by &env_max_procs(), you can fill in an integer here.
> max_np =
>
> # OMPI Core: Output display preference; the default width at which MTT
> # output will wrap.
> textwrap = 76
>
> # OMPI Core: After the timeout for a command has passed, wait this
> # many additional seconds to drain all output, and then kill it with
> # extreme prejudice.
> drain_timeout = 5
>
> # OMPI Core: Whether this invocation of the client is a test of the
> # client setup itself. Specifically, this value should be set to true
> # (1) if you are testing your MTT client and/or INI file and do not
> # want the results included in normal reporting in the MTT central
> # results database. Results submitted in "trial" mode are not
> # viewable (by default) on the central database, and are automatically
> # deleted from the database after a short time period (e.g., a week).
> # Setting this value to 1 is exactly equivalent to passing "--trial"
> # on the MTT client command line. However, any value specified here
> # in this INI file will override the "--trial" setting on the command
> # line (i.e., if you set "trial = 0" here in the INI file, that will
> # override and cancel the effect of "--trial" on the command line).
> # trial = 0
>
> # OMPI Core: Set the scratch parameter here (if you do not want it to
> # be automatically set to your current working directory). Setting
> # this parameter accomplishes the same thing that the --scratch option
> # does.
> # scratch = &getenv("HOME")/mtt-scratch
>
> # OMPI Core: Set local_username here if you would prefer to not have
> # your local user ID in the MTT database
> # local_username =
>
> # OMPI Core: --force can be set here, instead of at the command line.
> # Useful for a developer workspace in which it makes no sense to not
> # use --force
> # force = 1
>
> # OMPI Core: Specify a list of sentinel files that MTT will regularly
> # check for. If these files exist, MTT will exit more-or-less
> # immediately (i.e., after the current test completes) and report all
> # of its results. This is a graceful mechanism to make MTT stop right
> # where it is but not lose any results.
> # terminate_files = &getenv("HOME")/mtt-stop,&scratch_root()/mtt-stop
>
> # OMPI Core: Specify a default description string that is used in the
> # absence of description strings in the MPI install, Test build, and
> # Test run sections. The intent of this field is to record variable
> # data that is outside the scope but has an effect on the software under
> # test (e.g., firmware version of a NIC). If no description string is
> # specified here and no description strings are specified below, the
> # description data field is left empty when reported.
> # description = NIC firmware: &system("get_nic_firmware_rev")
>
> # OMPI Core: Specify a logfile where you want all MTT output to be
> # sent in addition to stdout / stderr.
> # logfile = /tmp/my-logfile.txt
>
> # OMPI Core: If you have additional .pm files for your own funclets,
> # you can have a comma-delimited list of them here. Note that each
> # .pm file *must* be a package within the MTT::Values::Functions
> # namespace. For example, a Cisco.pm file must include the
> # line:
> #
> # package MTT::Values::Functions::Cisco;
> #
> # If this file contains a perl function named foo, you can invoke this
> # funclet as &Cisco::foo(). Note that funclet files are loaded
> # almost immediately, so you can use them even for other field values
> # in the MTT section.
> # funclet_files = /path/to/file1.pm, /path/to/file2.pm
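> #
> # As a purely illustrative sketch (not part of the original template),
> # a minimal funclet file along those lines might look like the
> # following; the body of foo() is made up -- return whatever
> # site-specific value you actually need:
> #
> #   package MTT::Values::Functions::Cisco;
> #
> #   use strict;
> #   use warnings;
> #
> #   # Invoked from the INI file as &Cisco::foo()
> #   sub foo {
> #       my $v = `uname -r`;   # e.g., report the running kernel version
> #       chomp($v);
> #       return $v;
> #   }
> #
> #   1;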
>
> #----------------------------------------------------------------------
>
> [Lock]
> # The only module available is the MTTLockServer, and requires running
> # the mtt-lock-server executable somewhere. You can leave this
> # section blank and there will be no locking.
> #module = MTTLockServer
> #mttlockserver_host = hostname where mtt-lock-server is running
> #mttlockserver_port = integer port number of the mtt-lock-server
>
> #======================================================================
> # MPI get phase
> #======================================================================
>
> [MPI get: ompi-nightly-trunk]
> mpi_details = Open MPI
>
> module = OMPI_Snapshot
> ompi_snapshot_url = http://www.open-mpi.org/nightly/trunk
>
> #======================================================================
> # Install MPI phase
> #======================================================================
>
> [MPI install: gcc warnings]
> mpi_get = ompi-nightly-trunk
> save_stdout_on_success = 1
> merge_stdout_stderr = 0
> bitness = 32
>
> module = OMPI
> ompi_vpath_mode = none
> # OMPI Core: This is a GNU make option; if you are not using GNU make,
> # you'll probably want to delete this field (i.e., leave it to its
> # default [empty] value).
> ompi_make_check = 1
> # OMPI Core: You will likely need to update these values for whatever
> # compiler you want to use. You can pass any configure flags that you
> # want, including those that change which compiler to use (e.g., CC=cc
> # CXX=CC F77=f77 FC=f90). Valid compiler names are: gnu, pgi, intel,
> # ibm, kai, absoft, pathscale, sun. If you have other compiler names
> # that you need, let us know. Note that the compiler_name flag is
> # solely for classifying test results; it does not automatically pass
> # values to configure to set the compiler.
> ompi_compiler_name = gnu
> ompi_compiler_version = &get_gcc_version()
> ompi_configure_arguments = --enable-picky --enable-debug --enable-sparse-groups --with-openib=/usr/local/ofed
>
> #======================================================================
> # MPI run details
> #======================================================================
>
> [MPI Details: Open MPI]
>
> # MPI tests
> exec = mpirun @hosts@ -np &test_np() @mca@ --prefix &test_prefix() &test_executable() &test_argv()
>
> # ORTE tests
> exec:rte = &test_executable() --host &env_hosts() --prefix &test_prefix() &test_argv()
>
> #hosts = &if(&have_hostfile(), "--hostfile " . &hostfile(), \
> #        &if(&have_hostlist(), "--host " . &hostlist(), ""))
> # Yes, all these quotes are necessary. Don't mess with them!
> #hosts = &if(&have_hostfile(), "&join("--hostfile ", "&hostfile()")", \
> #        "&if(&have_hostlist(), "&join("--host ", "&hostlist()")", "")")
>
> # Example showing conditional substitution based on the MPI get
> # section name (e.g., different versions of OMPI have different
> # capabilities / bugs).
> mca = &enumerate( \
> "--mca btl openib,sm,self --mca mpi_leave_pinned 1")
>
> # Boolean indicating IB connectivity
> is_up = &check_ipoib_connectivity()
>
> # Figure out which mca's to use
> mca = <<EOT
> &perl('
>
>     # Return cached mca, if we have it
>     if (defined(@mca)) {
>         return \@mca;
>     }
>
>     my @hosts = split /\s+|,/, hostlist_hosts();
>
>     if (scalar(@hosts) < 2) {
>         push(@mca, "--mca btl self,sm");
>     } else {
>         if ($ib_up) {
>             push(@mca, "--mca btl self,openib");
>         } else {
>             push(@mca, "--mca btl self,tcp");
>         }
>     }
>     return \@mca;
> ')
> EOT
>
> #----------------------------------------------------------------------
> # WARNING: THIS DEFAULT after_each_exec STEP IS PRONE TO FAILURE!
> # Given that part of what we are testing is ORTE itself, using orterun
> # to launch something to clean up can be problematic. We *HIGHLY*
> # recommend that you replace the default after_each_exec value
> # below with something that your run-time system can perform
> # natively. For example, putting "srun -N $SLURM_NNODES killall -9
> # mpirun orted &test_executable()" works nicely on SLURM / Linux
> # systems -- assuming that your MTT run has all nodes exclusively to
> # itself (i.e., that the "killall" won't kill some legitimate jobs).
> #----------------------------------------------------------------------
>
> # A helper script is installed by the "OMPI" MPI Install module named
> # "mtt_ompi_cleanup.pl". This script is orterun-able and will kill
> # all rogue orteds on a node and whack any session directories.
> # Invoke via orterun just to emphasize that it is not an MPI
> # application. The helper script is installed in OMPI's bin dir, so
> # it'll automatically be found in the path (because OMPI's bin dir is
> # in the path).
> #after_each_exec = &if(&ne("", &getenv("SLURM_NNODES")), "srun -N &getenv("SLURM_NNODES")") ~/mtt-testing/after_each_exec.pl
> after_each_exec = srun -N $SLURM_NNODES killall -9 mpirun orted &test_executable()
>
> #======================================================================
> # Test get phase
> #======================================================================
>
> [Test get: ibm]
> module = SVN
> svn_url = https://svn.open-mpi.org/svn/ompi-tests/trunk/ibm
> svn_post_export = <<EOT
> ./autogen.sh
> EOT
>
> #----------------------------------------------------------------------
>
> [Test get: onesided]
> module = SVN
> svn_url = https://svn.open-mpi.org/svn/ompi-tests/trunk/onesided
> svn_post_export = <<EOT
> ./autogen.sh
> EOT
>
> #----------------------------------------------------------------------
>
> [Test get: mpicxx]
> module = SVN
> svn_url = https://svn.open-mpi.org/svn/ompi-tests/trunk/cxx-test-suite
> svn_post_export = <<EOT
> ./autogen.sh
> EOT
>
> #----------------------------------------------------------------------
>
> [Test get: imb]
> module = SVN
> svn_url = https://svn.open-mpi.org/svn/ompi-tests/trunk/IMB_2.3
>
> #----------------------------------------------------------------------
>
> [Test get: netpipe]
> module = SVN
> svn_url = https://svn.open-mpi.org/svn/ompi-tests/trunk/NetPIPE_3.6.2
>
> #----------------------------------------------------------------------
>
> [Test get: orte]
> module = SVN
> svn_url = https://svn.open-mpi.org/svn/ompi-tests/trunk/orte
>
> #======================================================================
> # Test build phase
> #======================================================================
>
> [Test build: ibm]
> test_get = ibm
> save_stdout_on_success = 1
> merge_stdout_stderr = 1
> stderr_save_lines = 100
>
> module = Shell
> shell_build_command = <<EOT
> ./configure CC=mpicc CXX=mpic++ F77=mpif77
> make
> EOT
>
> #----------------------------------------------------------------------
>
> [Test build: onesided]
> test_get = onesided
> save_stdout_on_success = 1
> merge_stdout_stderr = 1
> stderr_save_lines = 100
>
> module = Shell
> shell_build_command = <<EOT
> ./configure
> make
> EOT
>
> #----------------------------------------------------------------------
>
> [Test build: mpicxx]
> test_get = mpicxx
> save_stdout_on_success = 1
> merge_stdout_stderr = 1
>
> module = Shell
> shell_build_command = <<EOT
> ./configure CC=mpicc CXX=mpic++
> make
> EOT
>
> #----------------------------------------------------------------------
>
> [Test build: imb]
> test_get = imb
> save_stdout_on_success = 1
> merge_stdout_stderr = 1
> stderr_save_lines = 100
>
> module = Shell
> shell_build_command = <<EOT
> cd src
> make clean IMB-MPI1
> EOT
>
> #----------------------------------------------------------------------
>
> [Test build: netpipe]
> test_get = netpipe
> save_stdout_on_success = 1
> merge_stdout_stderr = 1
> stderr_save_lines = 100
>
> module = Shell
> shell_build_command = <<EOT
> make mpi
> EOT
>
> #----------------------------------------------------------------------
>
> [Test build: orte]
> test_get = orte
> save_stdout_on_success = 1
> merge_stdout_stderr = 1
> stderr_save_lines = 100
>
> module = Shell
> shell_build_command = <<EOT
> gmake
> EOT
>
> #======================================================================
> # Test Run phase
> #======================================================================
>
> [Test run: ibm]
> test_build = ibm
> pass = &and(&cmd_wifexited(), &eq(&cmd_wexitstatus(), 0))
> skipped = &and(&cmd_wifexited(), &eq(&cmd_wexitstatus(), 77))
> timeout = &max(30, &multiply(10, &test_np()))
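> # (For example, with &test_np() = 16 the timeout above works out to
> # &max(30, 160) = 160 -- presumably seconds, by analogy with the
> # explicit minutes:seconds timeouts such as "5:00" used further below.)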
> save_stdout_on_pass = 1
> merge_stdout_stderr = 1
> stdout_save_lines = 100
> np = &env_max_procs()
>
> specify_module = Simple
> # Similar rationale to the intel test run section
> simple_first:tests = &find_executables("collective", "communicator", \
>                      "datatype", "dynamic", "environment", \
>                      "group", "info", "io", "onesided", \
>                      "pt2pt", "topology")
>
> # Similar rationale to the intel test run section
> simple_fail:tests = environment/abort environment/final
> simple_fail:pass = &and(&cmd_wifexited(), &ne(&cmd_wexitstatus(), 0))
> simple_fail:exclusive = 1
> simple_fail:np = &env_max_procs()
>
> #----------------------------------------------------------------------
>
> [Test run: onesided]
> test_build = onesided
> pass = &and(&cmd_wifexited(), &eq(&cmd_wexitstatus(), 0))
> timeout = &max(30, &multiply(10, &test_np()))
> save_stdout_on_pass = 1
> merge_stdout_stderr = 1
> stdout_save_lines = 100
> np = &if(&gt(&env_max_procs(), 0), &step(2, &max(2, &env_max_procs()), 2), 2)
>
> specify_module = Simple
> simple_pass:tests = &cat("run_list")
>
> #----------------------------------------------------------------------
>
> [Test run: mpicxx]
> test_build = mpicxx
> pass = &and(&cmd_wifexited(), &eq(&cmd_wexitstatus(), 0))
> timeout = &max(30, &multiply(10, &test_np()))
> save_stdout_on_pass = 1
> merge_stdout_stderr = 1
> argv = &if(&eq("&mpi_get_name()", "ompi-nightly-v1.1"), "-nothrow", "")
> np = &env_max_procs()
>
> specify_module = Simple
> simple_pass:tests = src/mpi2c++_test src/mpi2c++_dynamics_test
>
> #----------------------------------------------------------------------
>
> [Test run: imb correctness]
> test_build = imb
> pass = &and(&cmd_wifexited(), &eq(&cmd_wexitstatus(), 0))
> timeout = &max(1800, &multiply(50, &test_np()))
> save_stdout_on_pass = 1
> merge_stdout_stderr = 1
> stdout_save_lines = 100
> np = &env_max_procs()
>
> specify_module = Simple
> simple_only:tests = src/IMB-MPI1
>
> #----------------------------------------------------------------------
>
> [Test run: imb performance]
> test_build = imb
> pass = &eq(&cmd_wexitstatus(), 0)
> timeout = -1
> save_stdout_on_pass = 1
> # Be sure to leave this value as "-1", or performance results could
> # be lost!
> stdout_save_lines = -1
> merge_stdout_stderr = 1
>
> argv = -npmin &test_np() &enumerate("PingPong", "PingPing", "Sendrecv", "Exchange", "Allreduce", "Reduce", "Reduce_scatter", "Allgather", "Allgatherv", "Alltoall", "Bcast", "Barrier")
>
> specify_module = Simple
> analyze_module = IMB
> simple_pass:tests = src/IMB-MPI1
>
> #----------------------------------------------------------------------
>
> [Test run: netpipe]
> test_build = netpipe
> pass = &eq(&cmd_wexitstatus(), 0)
> timeout = -1
> save_stdout_on_pass = 1
> # Be sure to leave this value as "-1", or performance results could
> # be lost!
> stdout_save_lines = -1
> merge_stdout_stderr = 1
> # NetPIPE is ping-pong only, so we only need 2 procs
> np = 2
>
> specify_module = Simple
> analyze_module = NetPipe
> simple_pass:tests = NPmpi
>
> #----------------------------------------------------------------------
>
> [Test run: orte]
> test_build = orte
> pass = &eq(&test_wexitstatus(), 0)
>
> # Give these tests a good long time to run.
> # (E.g., one orte test runs through a long series of
> # hostname patterns)
> timeout = 5:00
> save_stdout_on_pass = 1
> merge_stdout_stderr = 1
> np = &if(&gt(&env_max_procs(), 0), &step(2, &max(2, &env_max_procs()), 2), 2)
>
> module = Simple
> specify_module = Simple
>
> mpi_details_exec = rte
>
> simple_only:tests = &find_executables("./wrappers")
> simple_only_if_exec_exists = 1
>
> #======================================================================
> # Reporter phase
> #======================================================================
>
> [Reporter: IU database]
> module = MTTDatabase
>
> mttdatabase_realm = OMPI
> mttdatabase_url = https://www.open-mpi.org/mtt/submit/
> # OMPI Core: Change this to be the username and password for your
> # submit user. Get this from the OMPI MTT administrator.
> mttdatabase_username = uh
> mttdatabase_password = &stringify(&cat("/home/mschaara/mtt-testing/mtt-db-password.txt"))
> # OMPI Core: Change this to be some short string identifying your
> # cluster.
> mttdatabase_platform = shark
>
> #----------------------------------------------------------------------
>
> # This is a backup reporter for use while debugging MTT; it also
> # writes results to a local text file.
>
> [Reporter: text file backup]
> module = TextFile
>
> textfile_filename = $phase-$section-$mpi_name-$mpi_version.txt
>
> textfile_summary_header = <<EOT
> hostname: &shell("hostname")
> uname: &shell("uname -a")
> who am i: &shell("who am i")
> EOT
>
> textfile_summary_footer =
> textfile_detail_header =
> textfile_detail_footer =
>
> textfile_textwrap = 78
> #======================================================================
> # Generic OMPI core performance testing template configuration
> #======================================================================
>
> [MTT]
> # Leave this string so that we can identify this data subset in the
> # database
> # OMPI Core: Use a "test" label until we're ready to run real results
> description = [testbake]
> #description = [2007 collective performance bakeoff]
> # OMPI Core: Use the "trial" flag until we're ready to run real results
> trial = 1
>
> # Put other values here as relevant to your environment.
>
> #----------------------------------------------------------------------
>
> [Lock]
> # Put values here as relevant to your environment.
>
> #======================================================================
> # MPI get phase
> #======================================================================
>
> [MPI get: ompi-nightly-trunk]
> mpi_details = OMPI
>
> module = OMPI_Snapshot
> ompi_snapshot_url = http://www.open-mpi.org/nightly/trunk
>
> #----------------------------------------------------------------------
>
> [MPI get: MPICH2]
> mpi_details = MPICH2
>
> module = Download
> download_url = http://www-unix.mcs.anl.gov/mpi/mpich2/downloads/mpich2-1.0.5p4.tar.gz
>
> #----------------------------------------------------------------------
>
> [MPI get: MVAPICH2]
> mpi_details = MVAPICH2
>
> module = Download
> download_url = http://mvapich.cse.ohio-state.edu/download/mvapich2/mvapich2-0.9.8p3.tar.gz
>
> #======================================================================
> # Install MPI phase
> #======================================================================
>
> # All flavors of Open MPI
> [MPI install: OMPI/GNU-standard]
> mpi_get = ompi-nightly-trunk
> save_stdout_on_success = 1
> merge_stdout_stderr = 0
>
> module = OMPI
> ompi_compiler_name = gnu
> ompi_compiler_version = &get_gcc_version()
> # Adjust these configure flags for your site
> ompi_configure_arguments = CFLAGS=-O3 --enable-mpirun-prefix-by-default --enable-branch-probabilities --disable-heterogeneous --without-mpi-param-check --enable-sparse-groups --with-openib=/usr/local/ofed
>
> #----------------------------------------------------------------------
>
> [MPI install: MPICH2]
> mpi_get = mpich2
> save_stdout_on_success = 1
> merge_stdout_stderr = 0
> # Adjust this for your site (this is what works at Cisco). Needed to
> # launch in SLURM; adding this to LD_LIBRARY_PATH here propagates this
> # all the way through the test run phases that use this MPI install,
> # where the test executables will need to have this set.
> prepend_path = LD_LIBRARY_PATH /opt/SLURM/lib
>
> module = MPICH2
> mpich2_compiler_name = gnu
> mpich2_compiler_version = &get_gcc_version()
> mpich2_configure_arguments = --disable-f90 CFLAGS=-O3 --enable-fast --with-device=ch3:nemesis
> # These are needed to launch through SLURM; adjust as appropriate.
> mpich2_additional_wrapper_ldflags = -L/opt/SLURM/lib
> mpich2_additional_wrapper_libs = -lpmi
>
> #----------------------------------------------------------------------
>
> [MPI install: MVAPICH2]
> mpi_get = mvapich2
> save_stdout_on_success = 1
> merge_stdout_stderr = 0
> # Adjust this for your site (this is what works at Cisco). Needed to
> # launch in SLURM; adding this to LD_LIBRARY_PATH here propagates this
> # all the way through the test run phases that use this MPI install,
> # where the test executables will need to have this set.
> prepend_path = LD_LIBRARY_PATH /opt/SLURM/lib
>
> module = MVAPICH2
> # Adjust this to be where your OFED is installed
> mvapich2_setenv = OPEN_IB_HOME /usr/local/ofed
> mvapich2_setenv = F77 gfortran
> mvapich2_setenv = LIBS -L/usr/local/ofed/lib64 -libverbs -lpthread
> mvapich2_build_script = make.mvapich2.ofa
> mvapich2_compiler_name = gnu
> mvapich2_compiler_version = &get_gcc_version()
> # These are needed to launch through SLURM; adjust as appropriate.
> mvapich2_additional_wrapper_ldflags = -L/opt/SLURM/lib
> mvapich2_additional_wrapper_libs = -lpmi
>
> #======================================================================
> # MPI run details
> #======================================================================
>
> [MPI Details: OMPI]
> # Check &test_alloc() for byslot or bynode
> exec = mpirun @alloc@ -np &test_np() @mca@ &test_executable() &test_argv()
> parameters = &MPI::OMPI::find_mpirun_params(&test_command_line(), \
>              &test_executable())
> network = &MPI::OMPI::find_network(&test_command_line(), &test_executable())
>
> alloc = &if(&eq(&test_alloc(), "node"), "--bynode", "--byslot")
> mca = &enumerate( \
> "--mca btl sm,tcp,self ", \
> "--mca btl tcp,self ", \
> "--mca btl sm,openib,self ", \
> "--mca btl sm,openib,self --mca mpi_leave_pinned 1 ", \
> "--mca btl openib,self ", \
> "--mca btl openib,self --mca mpi_leave_pinned 1 ", \
> "--mca btl openib,self --mca mpi_leave_pinned_pipeline 1 ", \
> "--mca btl openib,self --mca btl_openib_use_srq 1 ")
>
> # It is important that the after_each_exec step is a single
> # command/line so that MTT will launch it directly (instead of via a
> # temporary script). This is because the "srun" command is
> # (intentionally) difficult to kill in some cases. See
> # https://svn.open-mpi.org/trac/mtt/changeset/657 for details.
>
> after_each_exec = &if(&ne("", &getenv("SLURM_NNODES")), "srun -N " . &getenv("SLURM_NNODES")) /home/mschaara/mtt-testing/after_each_exec.pl
>
> #----------------------------------------------------------------------
>
> [MPI Details: MPICH2]
>
> # Launching through SLURM. If you use mpdboot instead, you need to
> # ensure that multiple mpd's on the same node don't conflict (or never
> # happen).
> exec = srun @alloc@ -n &test_np() &test_executable() &test_argv()
>
> # If not using SLURM, you'll need something like this (not tested).
> # You may need different hostfiles for running by slot/by node.
> #exec = mpiexec -np &test_np() --host &hostlist() &test_executable()
>
> network = loopback,ethernet,shmem
>
> # In this SLURM example, if each node has 4 CPUs, telling SLURM to
> # launch "by node" means specifying that each MPI process will use 4
> # CPUs.
> alloc = &if(&eq(&test_alloc(), "node"), "-c 2", "")
>
> #----------------------------------------------------------------------
>
> [MPI Details: MVAPICH2]
>
> # Launching through SLURM. If you use mpdboot instead, you need to
> # ensure that multiple mpd's on the same node don't conflict (or never
> # happen).
> exec = srun @alloc@ -n &test_np() &test_executable() &test_argv()
>
> # If not using SLURM, you'll need something like this (not tested).
> # You may need different hostfiles for running by slot/by node.
> #exec = mpiexec -np &test_np() --host &hostlist() &test_executable()
>
> network = loopback,verbs,shmem
>
> # In this example, each node has 4 CPUs, so to launch "by node", just
> # specify that each MPI process will use 4 CPUs.
> alloc = &if(&eq(&test_alloc(), "node"), "-c 2", "")
>
> #======================================================================
> # Test get phase
> #======================================================================
>
> [Test get: netpipe]
> module = Download
> download_url = http://www.scl.ameslab.gov/netpipe/code/NetPIPE_3.6.2.tar.gz
>
> #----------------------------------------------------------------------
>
> [Test get: osu]
> module = SVN
> svn_url = https://svn.open-mpi.org/svn/ompi-tests/trunk/osu
>
> #----------------------------------------------------------------------
>
> [Test get: imb]
> module = SVN
> svn_url = https://svn.open-mpi.org/svn/ompi-tests/trunk/IMB_2.3
>
> #----------------------------------------------------------------------
>
> [Test get: skampi]
> module = SVN
> svn_url = https://svn.open-mpi.org/svn/ompi-tests/trunk/skampi-5.0.1
>
> #======================================================================
> # Test build phase
> #======================================================================
>
> [Test build: netpipe]
> test_get = netpipe
> save_stdout_on_success = 1
> merge_stdout_stderr = 1
> stderr_save_lines = 100
>
> module = Shell
> shell_build_command = <<EOT
> make mpi
> EOT
>
> #----------------------------------------------------------------------
>
> [Test build: osu]
> test_get = osu
> save_stdout_on_success = 1
> merge_stdout_stderr = 1
> stderr_save_lines = 100
>
> module = Shell
> shell_build_command = <<EOT
> make osu_latency osu_bw osu_bibw
> EOT
>
> #----------------------------------------------------------------------
>
> [Test build: imb]
> test_get = imb
> save_stdout_on_success = 1
> merge_stdout_stderr = 1
>
> module = Shell
> shell_build_command = <<EOT
> cd src
> make clean IMB-MPI1
> EOT
>
> #----------------------------------------------------------------------
>
> [Test build: skampi]
> test_get = skampi
> save_stdout_on_success = 1
> merge_stdout_stderr = 1
> stderr_save_lines = 100
>
> module = Shell
> # Set EVERYONE_CAN_MPI_IO for HP MPI
> shell_build_command = <<EOT
> make CFLAGS="-O2 -DPRODUCE_SPARSE_OUTPUT -DEVERYONE_CAN_MPI_IO"
> EOT
>
> #======================================================================
> # Test Run phase
> #======================================================================
>
> [Test run: netpipe]
> test_build = netpipe
> pass = &and(&cmd_wifexited(), &eq(&cmd_wexitstatus(), 0))
> # Timeout heuristic: 10 minutes
> timeout = 10:00
> save_stdout_on_pass = 1
> # Be sure to leave this value as "-1", or performance results could
> # be lost!
> stdout_save_lines = -1
> merge_stdout_stderr = 1
> np = 2
> alloc = node
>
> specify_module = Simple
> analyze_module = NetPipe
>
> simple_pass:tests = NPmpi
>
> #----------------------------------------------------------------------
>
> [Test run: osu]
> test_build = osu
> pass = &and(&cmd_wifexited(), &eq(&cmd_wexitstatus(), 0))
> # Timeout heuristic: 10 minutes
> timeout = 10:00
> save_stdout_on_pass = 1
> # Be sure to leave this value as "-1", or performance results could
> # be lost!
> stdout_save_lines = -1
> merge_stdout_stderr = 1
> np = 2
> alloc = node
>
> specify_module = Simple
> analyze_module = OSU
>
> simple_pass:tests = osu_bw osu_latency osu_bibw
>
> #----------------------------------------------------------------------
>
> [Test run: imb]
> test_build = imb
> pass = &and(&cmd_wifexited(), &eq(&cmd_wexitstatus(), 0))
> # Timeout heuristic: 10 minutes
> timeout = 10:00
> save_stdout_on_pass = 1
> # Be sure to leave this value as "-1", or performance results could
> # be lost!
> stdout_save_lines = -1
> merge_stdout_stderr = 1
> np = &env_max_procs()
>
> argv = -npmin &test_np() &enumerate("PingPong", "PingPing", "Sendrecv", "Exchange", "Allreduce", "Reduce", "Reduce_scatter", "Allgather", "Allgatherv", "Alltoall", "Bcast", "Barrier")
>
> specify_module = Simple
> analyze_module = IMB
>
> simple_pass:tests = src/IMB-MPI1
>
> #----------------------------------------------------------------------
>
> [Test run: skampi]
> test_build = skampi
> pass = &and(&cmd_wifexited(), &eq(&cmd_wexitstatus(), 0))
> # Timeout heuristic: 10 minutes
> timeout = 10:00
> save_stdout_on_pass = 1
> # Be sure to leave this value as "-1", or performance results could
> # be lost!
> stdout_save_lines = -1
> merge_stdout_stderr = 1
> np = &env_max_procs()
>
> argv = -i &find("mtt_.+.ski", "input_files_bakeoff")
>
> specify_module = Simple
> analyze_module = SKaMPI
>
> simple_pass:tests = skampi
>
> #======================================================================
> # Reporter phase
> #======================================================================
>
> [Reporter: IU database]
> module = MTTDatabase
>
> mttdatabase_realm = OMPI
> mttdatabase_url = https://www.open-mpi.org/mtt/submit/
> # Change this to be the username and password for your submit user.
> # Get this from the OMPI MTT administrator.
> mttdatabase_username = uh
> mttdatabase_password = &stringify(&cat("/home/mschaara/mtt-testing/mtt-db-password.txt"))
> # Change this to be some short string identifying your cluster.
> mttdatabase_platform = shark
>
> mttdatabase_debug_filename = mttdb_debug_file_perf
> mttdatabase_keep_debug_files = 1
>
> #----------------------------------------------------------------------
>
> # This is a backup reporter; it also writes results to a local text
> # file
>
> [Reporter: text file backup]
> module = TextFile
>
> textfile_filename = $phase-$section-$mpi_name-$mpi_version.txt
>
> textfile_summary_header = <<EOT
> Hostname: &shell("hostname")
> uname: &shell("uname -a")
> Username: &shell("who am i")
> EOT
>
> textfile_summary_footer =
> textfile_detail_header =
> textfile_detail_footer =
>
> textfile_textwrap = 78

-- 
Jeff Squyres
Cisco Systems