
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2007-08-24 14:38:28


Josh asked me to send some details on the "collectives performance
bakeoff" INI template file that is in the jms branch at
samples/ompi-core-perf-testing.ini. Here's the scoop:

  * As usual, this is a template. There are values that you will
need to fill in (e.g., the MTT database username/password, the
MPICH-MX username/password, etc.) and values that you will need to
tweak for your site (see the first sketch after this list).

  * The easy part: the test get, build, and run sections are for the
following tests: NetPIPE, OSU, IMB, and SKaMPI. It's a far smaller
test set than we run for regression / correctness testing. The
SKaMPI tests that are in there right now are preliminary; Jelena
will be putting together a new set sometime next week. But testing
with what is there now is still quite useful (to verify MTT's
functionality).

  * I added support for many more MPIs to MTT; this is what has
consumed the majority of my time this week. Here are the MPIs that
we currently support:
    - Open MPI (of course)
    - MPICH1, MPICH2 (still waiting on word on a legal issue about a
patch for MPICH1 to run properly under SLURM)
    - MVAPICH1, MVAPICH2
    - Intel MPI
    - HP MPI (should be done this afternoon)

  * Other MPIs that will likely not be difficult to add (I do not
have access to do the work myself):
    - Scali MPI
    - Cray MPI
    - MPICH-MX
    - CT6 / CT7

  * The MPI gets should be trivial; they're all public (except for
MPICH-MX).

  * The MPI installs should all build the most optimized version of
the MPI possible (e.g., see the OMPI and MPICH2 MPI Install sections).

  * Note that there's some "weird" stuff for MPICH2 and Intel MPI.
See the comments in the ini file for explanations.

  * If you're not using SLURM, you'll need before_any_exec /
after_all_exec sections for MPICH2 and MVAPICH2, like the ones in
Intel MPI's MPI Details section (a rough sketch follows after this
list). Also note the setenv in Intel MPI's MPI Install section -- I
don't know offhand whether that'll work for vanilla MPICH2 or whether
that was something the Intel MPI team added to mpd.
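For the fill-in values, here's a rough idea of what the reporter
credentials look like in the INI file (a minimal sketch only; the
section and field names here are approximate, so go by the comments
in the template itself for the authoritative spellings):

  # Hypothetical excerpt -- fill these in before running
  [Reporter: IU database]
  module = MTTDatabase
  mttdatabase_username = <your MTT database username>
  mttdatabase_password = <your MTT database password>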
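And for sites without SLURM, the mpd ring management for MPICH2 /
MVAPICH2 would look something like the following. This is just a
sketch under the assumption that mpdboot / mpdallexit are the right
commands for your mpd setup, and the exec line is only indicative --
mirror what the Intel MPI MPI Details section in the template
actually does:

  # Hypothetical sketch for an mpd-based MPI outside of SLURM
  [MPI Details: MPICH2]
  # Start the mpd ring before any tests run; tear it down afterwards
  before_any_exec = mpdboot -n <num hosts> -f <hostfile>
  after_all_exec = mpdallexit
  exec = mpirun -np &test_np() &test_executable() &test_argv()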

Basically, we want to see if the organizations can take this template
and run with it to get performance results back to the MTT database
(even with just 2 MPI processes).

Let me know if you have any questions / problems.

-- 
Jeff Squyres
Cisco Systems