
MTT Devel Mailing List Archives


From: Josh Hursey (jjhursey_at_[hidden])
Date: 2007-08-29 07:12:58

Sorry for the delay in replying.

On Aug 27, 2007, at 6:16 PM, Jeff Squyres wrote:

>>> - We need a well-defined way to see which collective implementation
>>> was used. Since there are N AlltoAll implementations in the
>>> 'tuned' component, we need to know, when looking at a graph,
>>> which of the N we are seeing for Open MPI. For other MPI
>>> implementations we don't have as much control.
> I don't know if MTT can. In order for MTT to do this, OMPI needs to
> export that data somehow.

So I see two solutions to this:
  1) Require that everyone specify the collective on the command line.
  2) Create a special, hidden MCA parameter that prints the collective
being used, once per run. MTT could then extract that from the output
and track it.
Just an idea.
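For option 2, here is a minimal sketch of how MTT might pull the collective out of a run's captured stdout. Note that the "[coll:tuned] Bcast: algorithm 3" marker format below is purely an assumption for illustration; it is not a line that Open MPI prints today, and the hidden MCA parameter that would emit it does not yet exist:

```python
import re

# Hypothetical one-line marker the proposed MCA parameter might emit
# once per run; this exact format is an assumption, not an existing
# Open MPI message.
MARKER = re.compile(
    r"^\[coll:(?P<component>\w+)\]\s+(?P<op>\w+):\s+algorithm\s+(?P<alg>\d+)\s*$"
)

def extract_collective(stdout: str):
    """Scan captured mpirun output for the (assumed) collective marker.

    Returns a dict like {'component': 'tuned', 'op': 'Bcast', 'alg': '3'},
    or None if no marker line was found.
    """
    for line in stdout.splitlines():
        m = MARKER.match(line.strip())
        if m:
            return m.groupdict()
    return None

sample = """\
Some benchmark header
[coll:tuned] Bcast: algorithm 3
... timing output ...
"""
print(extract_collective(sample))
# → {'component': 'tuned', 'op': 'Bcast', 'alg': '3'}
```

If something like this were adopted, the extracted fields could be stored alongside each test result so the reporter could distinguish the N algorithm variants when graphing.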

>>> - It is difficult to search in the reporter for queries like:
>>> ----------
>>> * Open MPI run with only tcp,sm,self
>> How about something like this?
> I did some skampi runs to see verbs results across 2 MPIs (Intel MPI
> uses udapl, not tcp). I don't really think that this is hard:
> - network: verbs (or TCP in Josh's case)
> - test suite: skampi
> - command: bcast (granted, per #281, you have to fill in "bcast" on
> the "command" field on the advanced window, not the normal window)
> It should show all the MPIs. You probably want to limit it down to
> a specific platform, though, in order to get apples-to-apples
> comparisons.
>>> * Intel MPI (which is only tcp I believe)
>>> * MPICH2 with tcp results from running the skampi Bcast benchmark.
>>> ----------
>>> The reporter is designed to track a single MPI well for regression
>>> tracking. However, when we need to compare multiple MPIs, and each
>>> may need to be selected with a different type of query, it is hard
>>> (if not impossible) to do.
> I don't see why this is hard...? I disagree with the statement
> "Reporter is designed to track a single MPI well..." See the
> permalink above.

Hmm, I was having an awful time getting these results to pair up the
way you did in the link you gave me (I actually gave up after a
while). Maybe I was using the reporter incorrectly.

>>> One solution I proposed was using the 'tagging' idea, but there
>>> might be some alternative UI features that we can develop to better
>>> support these types of queries. Tim P seemed interested/had some
>>> ideas on how to do this.