From: Josh Hursey (jjhursey_at_[hidden])
Date: 2006-11-06 09:59:02


On Nov 6, 2006, at 7:45 AM, Jeff Squyres wrote:

> On Nov 3, 2006, at 2:25 PM, Josh Hursey wrote:
>
>> I have an INI file that looks something like what is enclosed at the
>> end of this message.
>>
>> So I have multiple MPI Details sections. It seems like only the first
>> one is running. Do I have to list them out somewhere?
>
> Yes. In the MPI Get section, you have to list an "mpi_details" field
> that says which MPI Details section applies to this MPI. We don't
> currently allow a comma-delimited list of Details section names, but
> that could be added if you want/need.
>
>> As a side question:
>> Instead of using multiple MPI Details sections, if I use a single MPI
>> Details section and tell MTT to iterate over the parameters to mpirun,
>> at what point in the aggregation chain does this happen?
>> Meaning will I see something like:
>
> We hadn't really anticipated having multiple MPI details sections
> that were relevant for a single MPI -- we had intended the MPI
> details section to provide the generic method of how to run with that
> MPI (e.g., mpirun command line differences between different MPI
> implementations). But that could be changed if it would be helpful.
>
> Ethan made a relevant comment here as well.

So if I wanted to do this (in a kludgy way) I would have to have
something like:
-----------------------------------
[MPI Get: trunk BTL_V1]
mpi_details = Open MPI BTL V1
[MPI Get: trunk BTL_V2]
mpi_details = Open MPI BTL V2

[MPI Details: Open MPI BTL V1]
mpirun -mca btl tcp,self
[MPI Details: Open MPI BTL V2]
mpirun -mca btl mx,self
-----------------------------------

This is similar to what we would have to do in order to run both Open
MPI and LAM/MPI from the same INI file:
-----------------------------------
[MPI Get: trunk OMPI]
mpi_details = Open MPI
[MPI Get: trunk LAM]
mpi_details = LAM/MPI

[MPI Details: Open MPI]
mpirun -mca btl tcp,self
[MPI Details: LAM/MPI]
mpirun -ssi rpi tcp
-----------------------------------
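
As an aside: if the comma-delimited list of Details section names that
Jeff mentioned were ever added, the first kludge above could presumably
collapse into a single MPI Get section -- hypothetical syntax, not
currently supported:
-----------------------------------
[MPI Get: trunk]
mpi_details = Open MPI BTL V1, Open MPI BTL V2
-----------------------------------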

>
>> Building trunk
>> Running trivial
>> Running mpirun -mca btl tcp,self
>> Running mpirun -mca btl mx,self
>> Running mpirun -mca btl mvapi,self
>> Running intel
>> Running mpirun -mca btl tcp,self
>> Running mpirun -mca btl mx,self
>> Running mpirun -mca btl mvapi,self
>
> You will see this ^^^. The way it works is that the Test Run phase
> will iterate through the successful Test Builds (which depend on the
> successful Test Gets and MPI Installs, which, in turn, depend on
> successful MPI Gets). When it identifies a test to run, it looks up
> the corresponding MPI Install section, gets the MPI Details section,
> and creates one or more command lines to execute. So a single Test
> Run corresponds to a single Test Build / MPI Install / MPI Get tuple,
> and therefore results in a single funclet for the command line to
> execute.

Cool. Just wanted to know what to expect from MTT.
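
To make sure I'm picturing that expansion right (a sketch -- the
hostfile, np value, prefix path, and test executable below are all
made up), a template like:
-----------------------------------
exec = mpirun @hosts@ -mca btl tcp,sm,self -np &test_np() --prefix &test_prefix() &test_executable() &test_argv()
-----------------------------------
would turn into a concrete command line along the lines of:
-----------------------------------
mpirun --hostfile my_hosts -mca btl tcp,sm,self -np 4 --prefix /opt/ompi/trunk ./hello_world
-----------------------------------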

>
>> or will I see
>>
>> Building trunk
>> Running mpirun -mca btl tcp,self
>> Running trivial
>> Running intel
>> Running mpirun -mca btl mx,self
>> Running trivial
>> Running intel
>> Running mpirun -mca btl mvapi,self
>> Running trivial
>> Running intel
>
> I'm not quite sure I understand the example ordering above -- I see
> the "mpirun" lines before the "Running <testname>" lines. Did you
> mean:
>
> Building trunk
> Running trivial
> Running mpirun -mca btl tcp,self
> Running intel
> Running mpirun -mca btl tcp,self
> Running trivial
> Running mpirun -mca btl mx,self
> Running intel
> Running mpirun -mca btl mx,self
> Running trivial
> Running mpirun -mca btl mvapi,self
> Running intel
> Running mpirun -mca btl mvapi,self

Yeah, that's what I was thinking.

>
>> I would actually prefer the latter, because if something is broken
>> with the IB interconnect it doesn't keep the other tests from
>> reporting. I.e., I can prioritize a bit in the INI file.
>
> I could see that. Unfortunately, that's not currently how the run
> engine is structured.

Thanks :)

We'll find a way to make it work, though that may be a good task for
someone else at IU to experiment with.
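
One sketch of a workaround (untested, and assuming the mtt client's
--file option works the way I think it does): keep one INI file per
interconnect and run them in priority order from a wrapper script, so
a wedged interconnect can't hold up the others' reports:
-----------------------------------
#!/bin/sh
# Untested sketch: run the interconnects most likely to be healthy
# first; each invocation reports its own results independently.
for ini in mtt-tcp.ini mtt-mx.ini mtt-mvapi.ini; do
    perl ./client/mtt --file $ini
done
-----------------------------------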

-- Josh

>
>> Cheers,
>> Josh
>>
>> ------------------------------
>> #======================================================================
>> # Overall configuration
>> #======================================================================
>>
>> [MTT]
>>
>> hostfile =
>> hostlist =
>> max_np =
>>
>> #----------------------------------------------------------------------
>>
>> #======================================================================
>> # MPI get phase
>> #======================================================================
>>
>> [MPI get: ompi-nightly-trunk]
>> mpi_details = Open MPI
>>
>> module = OMPI_Snapshot
>> ompi_snapshot_url = http://www.open-mpi.org/nightly/trunk
>>
>> #----------------------------------------------------------------------
>>
>> #======================================================================
>> # Install MPI phase
>> #======================================================================
>>
>> [MPI install: odin 32 bit gcc]
>> mpi_get = ompi-nightly-trunk
>> save_stdout_on_success = 1
>> merge_stdout_stderr = 1
>> vpath_mode = none
>>
>> make_all_arguments = -j 6
>> make_check = 1
>>
>> compiler_name = gnu
>> compiler_version = &shell("gcc --version | head -n 1 | awk '{ print \$3 }'")
>> configure_arguments = \
>>     FCFLAGS=-m32 FFLAGS=-m32 CFLAGS=-m32 CXXFLAGS=-m32 \
>>     --with-wrapper-cflags=-m32 --with-wrapper-cxxflags=-m32 \
>>     --with-wrapper-fflags=-m32 --with-wrapper-fcflags=-m32
>>
>> module = OMPI
>>
>> #----------------------------------------------------------------------
>>
>> #======================================================================
>> # MPI run details
>> #======================================================================
>>
>> [MPI Details: Open MPI tcp sm]
>> exec = mpirun @hosts@ -mca btl tcp,sm,self -np &test_np() --prefix &test_prefix() &test_executable() &test_argv()
>>
>> # Yes, all these quotes are necessary. Don't mess with them!
>> hosts = &if(&have_hostfile(), "&join("--hostfile ", "&hostfile()")", \
>>         "&if(&have_hostlist(), "&join("--host ", "&hostlist()")", "")")
>>
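>> # Run after each test; as I understand it, mtt_ompi_cleanup.pl kills
>> # any stray ORTE processes left over from the previous run.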
>> after_each_exec = <<EOT
>> if test "$MTT_TEST_HOSTFILE" != ""; then
>>     args="--hostfile $MTT_TEST_HOSTFILE"
>> elif test "$MTT_TEST_HOSTLIST" != ""; then
>>     args="--host $MTT_TEST_HOSTLIST"
>> fi
>> orterun $args -np $MTT_TEST_NP --prefix $MTT_TEST_PREFIX mtt_ompi_cleanup.pl
>> EOT
>>
>> #----------------------------------------------------------------------
>>
>> [MPI Details: Open MPI mx sm]
>> exec = mpirun @hosts@ -mca btl mx,sm,self -np &test_np() --prefix &test_prefix() &test_executable() &test_argv()
>>
>> # Yes, all these quotes are necessary. Don't mess with them!
>> hosts = &if(&have_hostfile(), "&join("--hostfile ", "&hostfile()")", \
>>         "&if(&have_hostlist(), "&join("--host ", "&hostlist()")", "")")
>>
>> after_each_exec = <<EOT
>> if test "$MTT_TEST_HOSTFILE" != ""; then
>>     args="--hostfile $MTT_TEST_HOSTFILE"
>> elif test "$MTT_TEST_HOSTLIST" != ""; then
>>     args="--host $MTT_TEST_HOSTLIST"
>> fi
>> orterun $args -np $MTT_TEST_NP --prefix $MTT_TEST_PREFIX mtt_ompi_cleanup.pl
>> EOT
>>
>> #----------------------------------------------------------------------
>>
>> [MPI Details: Open MPI mvapi sm]
>> exec = mpirun @hosts@ -mca btl mvapi,sm,self -np &test_np() --prefix &test_prefix() &test_executable() &test_argv()
>>
>> # Yes, all these quotes are necessary. Don't mess with them!
>> hosts = &if(&have_hostfile(), "&join("--hostfile ", "&hostfile()")", \
>>         "&if(&have_hostlist(), "&join("--host ", "&hostlist()")", "")")
>>
>> after_each_exec = <<EOT
>> if test "$MTT_TEST_HOSTFILE" != ""; then
>>     args="--hostfile $MTT_TEST_HOSTFILE"
>> elif test "$MTT_TEST_HOSTLIST" != ""; then
>>     args="--host $MTT_TEST_HOSTLIST"
>> fi
>> orterun $args -np $MTT_TEST_NP --prefix $MTT_TEST_PREFIX mtt_ompi_cleanup.pl
>> EOT
>>
>> #----------------------------------------------------------------------
>>
>> #======================================================================
>> # Test get phase
>> #======================================================================
>>
>> [Test get: trivial]
>> module = Trivial
>>
>> #----------------------------------------------------------------------
>>
>> #======================================================================
>> # Test build phase
>> #======================================================================
>>
>> [Test build: trivial]
>> test_get = trivial
>> save_stdout_on_success = 1
>> merge_stdout_stderr = 1
>> stderr_save_lines = -1
>>
>> module = Trivial
>>
>> #----------------------------------------------------------------------
>>
>> #======================================================================
>> # Test Run phase
>> #======================================================================
>>
>> [Test run: trivial]
>> test_build = trivial
>> pass = &eq(&test_exit_status(), 0)
>> timeout = &multiply(2, &test_np())
>> save_stdout_on_pass = 1
>> merge_stdout_stderr = 1
>> stdout_save_lines = 100
>> np = &env_max_procs()
>>
>> module = Simple
>> simple_only:tests = &find_executables(".")
>>
>> #----------------------------------------------------------------------
>>
>> #======================================================================
>> # Reporter phase
>> #======================================================================
>>
>> [Reporter: IU database]
>> module = MTTDatabase
>>
>> mttdatabase_realm = OMPI
>> mttdatabase_url = https://www.open-mpi.org/mtt/submit/
>> # OMPI Core: Change this to be the username and password for your
>> # submit user. Get this from the OMPI MTT administrator.
>> mttdatabase_username = XX
>> mttdatabase_password = XX
>> # OMPI Core: Change this to be some short string identifying your
>> # cluster.
>> mttdatabase_platform = IU - Thor - TESTING
>>
>> #----------------------------------------------------------------------
>>
>> # This is a backup while debugging MTT; it also writes results to
>> # a local text file
>> [Reporter: text file backup]
>> module = TextFile
>>
>> textfile_filename = mtt-debug-report-2-$phase-$section-$mpi_name-$mpi_version.txt
>> textfile_separator = >>>>----------------------------------------------------------<<<<
>>
>>
>> #----------------------------------------------------------------------
>>
>
>
> --
> Jeff Squyres
> Server Virtualization Business Unit
> Cisco Systems
>
> _______________________________________________
> mtt-users mailing list
> mtt-users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/mtt-users