
Subject: Re: [OMPI users] Automated tuning tool
From: Gus Correa (gus_at_[hidden])
Date: 2009-08-11 16:45:08


Thank you, John Casu and Edgar Gabriel, for the pointers
to the parameter-space sweep script and the OTPO code.

For simplicity,
I was thinking of benchmarking each tuned collective separately,
rather than our full applications, to get an idea
of which algorithms and parameters are best for our small cluster
over a range of message and communicator sizes.

We have several applications, different problem sizes,
different numbers of processes, etc.,
and they all use a variety of collectives besides point-to-point.
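
To make the sweep idea concrete, here is a minimal sketch of what I
have in mind for a single collective (MPI_Bcast). The coll_tuned MCA
parameter names are real (see "ompi_info --param coll tuned"), but the
./bcast_bench benchmark binary and the particular value ranges below
are just placeholders for whatever benchmark and ranges make sense:

#!/usr/bin/env python
# Minimal sketch: sweep the tuned bcast algorithm and segment size
# over a few communicator sizes. The MCA parameter names are real;
# ./bcast_bench is a placeholder for any benchmark that times
# MPI_Bcast and prints a single latency figure.
import itertools
import subprocess

algorithms = range(7)                   # coll_tuned_bcast_algorithm 0..6
segment_sizes = [0, 1024, 8192, 32768]  # bytes; 0 = no segmentation
nprocs = [4, 8, 16]                     # communicator sizes to test

for algo, seg, np in itertools.product(algorithms, segment_sizes, nprocs):
    cmd = ["mpirun", "-np", str(np),
           "--mca", "coll_tuned_use_dynamic_rules", "1",
           "--mca", "coll_tuned_bcast_algorithm", str(algo),
           "--mca", "coll_tuned_bcast_algorithm_segmentsize", str(seg),
           "./bcast_bench"]
    out = subprocess.check_output(cmd).decode().strip()
    print(algo, seg, np, out)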

Gus Correa

john casu wrote:
> I'm not sure that there is a general "best set" of parameters, given the
> dependence of that set on comms patterns, etc...
>
> Still, this *is* a classic parameter sweep and optimization problem
> (unlike ATLAS), with a small number of parameters, and is the sort of
> thing one should be able to hook up fairly easily in a python script
> connected to a batch scheduler, especially since you'd be likely to
> submit and run either a single job or a number of equal-sized jobs in
> parallel.
>
> In fact, here is a python script that works with SGE
> http://www.cs.umass.edu/~swarm/index.php?n=Sge.Py
>
> Now, you'd just have to choose the app, or apps, that are important to you.
>
>
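
Along the lines John suggests, farming the sweep out to the batch
scheduler could look something like the sketch below. This assumes SGE
with a parallel environment named "orte"; the job template, the file
names, and the ./bcast_bench benchmark are all placeholders:

#!/usr/bin/env python
# Sketch: submit one SGE job per parameter combination. The "orte"
# parallel environment name and ./bcast_bench are assumptions.
import subprocess

JOB = """#!/bin/sh
#$ -N sweep_a{algo}_s{seg}
#$ -pe orte {np}
#$ -cwd
mpirun -np {np} \\
    --mca coll_tuned_use_dynamic_rules 1 \\
    --mca coll_tuned_bcast_algorithm {algo} \\
    --mca coll_tuned_bcast_algorithm_segmentsize {seg} \\
    ./bcast_bench > result_a{algo}_s{seg}_n{np}.txt
"""

for algo in range(7):
    for seg in (0, 8192, 32768):
        name = "job_a%d_s%d.sh" % (algo, seg)
        with open(name, "w") as f:
            f.write(JOB.format(algo=algo, seg=seg, np=16))
        subprocess.check_call(["qsub", name])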

Edgar Gabriel wrote:
> Gus Correa wrote:
>> Terry Frankcombe wrote:
>>> There's been quite some discussion here lately about the effect of OMPI
>>> tuning parameters, particularly w.r.t. collectives.
>>>
>>> Is there some tool to probe performance on any given network/collection
>>> of nodes to aid optimisation of these parameters?
>>>
>>> (I'm thinking something along the philosophy of ATLAS.)
>>>
>>>
>>
>>
>> Hi Terry
>>
>> We are also looking for this holy grail.
>>
>> So far I found this 2008 reference to a certain
>> "Open Tool for Parameter Optimization (OTPO)":
>>
>> http://www.springerlink.com/content/h5162153l184r7p0/
>>
>> OTPO defines itself as this:
>>
>> "OTPO systematically tests large numbers of combinations of Open MPI’s
>> run-time tunable parameters for common communication patterns and
>> performance metrics to determine the “best” set for a given platform."
>
> you can checkout the OTPO code at
>
> http://svn.open-mpi.org/svn/otpo/trunk/
>
> As of now it supports NetPIPE and the SKaMPI collectives for tuning.
> It is far from perfect, but it is a starting point. If there are any
> issues, please let us know...
>
> Thanks
> Edgar
>
>>
>> However, I couldn't find any reference to the actual code or scripts,
>> or to whether they are available, tested, free, downloadable, etc.
>>
>> At this point I am doing these performance
>> tests in a laborious and inefficient manual way,
>> when I have the time to do it.
>>
>> As some of the aforementioned article's authors
>> are list subscribers (and Open MPI developers),
>> maybe they can shed some light on OTPO, tuned collective
>> optimization, Open MPI runtime parameter optimization, etc.
>>
>> IMHO, this topic deserves at least a FAQ.
>>
>> Developers, Jeff: Any suggestions? :)
>>
>> Many thanks,
>> Gus Correa
>> ---------------------------------------------------------------------
>> Gustavo Correa
>> Lamont-Doherty Earth Observatory - Columbia University
>> Palisades, NY, 10964-8000 - USA
>> ---------------------------------------------------------------------
>>
>