
Open MPI User's Mailing List Archives


From: Galen Shipman (gshipman_at_[hidden])
Date: 2006-11-30 10:18:03


Looking VERY briefly at the GAMMA API here:
http://www.disi.unige.it/project/gamma/gamma_api.html

It looks like one could create a GAMMA BTL with a minimal amount of
trouble.
I would encourage your group to do this!

There is quite a bit of information regarding the BTL interface, and
for GAMMA it looks like all you would need to start is the send/recv
interfaces; see the sketch below. You could do trickier things with
the RDMA put/get interfaces to minimize memory copies (we do this
with TCP), but that is not necessary for correctness. Anyway, after
the sketch is the current list of docs that explain our P2P layers.
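
To make that concrete, here is a very rough sketch of the send hook
such a BTL would implement. This is NOT real Open MPI code: the types
are simplified stand-ins for the real definitions in
ompi/mca/btl/btl.h, and gamma_send() is a hypothetical placeholder
for whatever the GAMMA API actually provides (check the API page
linked above):

/* btl_gamma_sketch.c -- NOT real Open MPI code.  A rough sketch of
 * the send hook a GAMMA BTL would implement.  The types below are
 * simplified stand-ins for the definitions in ompi/mca/btl/btl.h,
 * and gamma_send() is a hypothetical placeholder for the real
 * GAMMA API. */

#include <stddef.h>
#include <stdint.h>

#define OMPI_SUCCESS 0
#define OMPI_ERROR  (-1)

typedef uint8_t mca_btl_base_tag_t;

/* Stand-in for mca_btl_base_segment_t: one chunk of payload. */
typedef struct {
    void   *seg_addr;
    size_t  seg_len;
} btl_segment_t;

/* Stand-in for mca_btl_base_descriptor_t: the segments to send. */
typedef struct {
    btl_segment_t *des_src;
    size_t         des_src_cnt;
} btl_descriptor_t;

/* Stand-in for mca_btl_base_endpoint_t: assume one GAMMA port
 * per peer process. */
typedef struct {
    int gamma_port;
} btl_endpoint_t;

/* Stub standing in for the real GAMMA send call (hypothetical
 * signature -- check the GAMMA API page before relying on it). */
static int gamma_send(int port, void *buf, size_t len)
{
    (void)port; (void)buf; (void)len;
    return 0; /* pretend success */
}

/* The BTL send hook: hand each source segment to GAMMA.  A real
 * BTL would register this in the module's btl_send slot and fire
 * the descriptor's completion callback once the data is on the
 * wire. */
static int mca_btl_gamma_send(void *btl_module,
                              btl_endpoint_t *endpoint,
                              btl_descriptor_t *des,
                              mca_btl_base_tag_t tag)
{
    (void)btl_module;
    (void)tag;  /* in reality the tag rides in a match header */
    for (size_t i = 0; i < des->des_src_cnt; i++) {
        if (gamma_send(endpoint->gamma_port,
                       des->des_src[i].seg_addr,
                       des->des_src[i].seg_len) != 0) {
            return OMPI_ERROR;
        }
    }
    return OMPI_SUCCESS;
}

int main(void)
{
    char payload[] = "hello";
    btl_segment_t    seg  = { payload, sizeof payload };
    btl_descriptor_t des  = { &seg, 1 };
    btl_endpoint_t   peer = { 0 };
    return mca_btl_gamma_send(NULL, &peer, &des, 0) ? 1 : 0;
}

The receive side would similarly register a callback for each tag;
the docs below explain how the PML drives all of this.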

Here is a paper on PML OB1, the upper layer above the BTLs. You
wouldn't need to worry much about this, but it's good to know what we
are doing:
http://www.open-mpi.org/papers/euro-pvmmpi-2006-hpc-protocols

There is also some information in this paper about the PML/BTL
interactions, from an InfiniBand point of view:
http://www.open-mpi.org/papers/ipdps-2006

For a very detailed presentation on OB1, go here; this is probably
the most relevant:
http://www.open-mpi.org/papers/workshop-2006/wed_01_pt2pt.pdf
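
One side note on the Netpipe timing issue Tony raises below: since
MPI_Send may return as soon as the buffer is handed off locally,
timing the send call alone can be misleading. A round-trip
(ping-pong) timing includes delivery at the receiver. Here is a
minimal sketch (standard MPI; the iteration count and message size
are arbitrary choices):

/* pingpong.c -- minimal round-trip timing sketch.  Build with
 * "mpicc pingpong.c -o pingpong" and run with two ranks.  Halving
 * the round-trip time gives a one-way latency that includes
 * delivery at the receiver, unlike timing MPI_Send alone, which
 * may return once the buffer is copied locally. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size, iters = 1000, n = 1024;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with two ranks\n");
        MPI_Finalize();
        return 1;
    }
    char *buf = malloc(n);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {        /* ping */
            MPI_Send(buf, n, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, n, MPI_BYTE, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) { /* pong */
            MPI_Recv(buf, n, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, n, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0) {
        /* one-way time = half the average round trip */
        printf("%d-byte one-way latency: %.2f us\n", n,
               (t1 - t0) / (2.0 * iters) * 1e6);
    }
    free(buf);
    MPI_Finalize();
    return 0;
}

A benchmark that times only the sender's side will look better than
this under congestion, which is exactly Tony's point about
oversubscribed GigE switches.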

Thanks,

Galen

On Oct 23, 2006, at 4:05 PM, Lisandro Dalcin wrote:

> On 10/23/06, Tony Ladd <ladd_at_[hidden]> wrote:
>> A couple of comments regarding issues raised by this thread.
>>
>> 1) In my opinion Netpipe is not such a great network benchmarking
>> tool for HPC applications. It measures timings based on the
>> completion of the send call on the transmitter, not the completion
>> of the receive. Thus, if there is a delay in copying the send
>> buffer across the net, it will report a misleading timing compared
>> with the wall-clock time. This is particularly problematic with
>> multiple pairs of edge exchanges, which can oversubscribe most
>> GigE switches. Here the Netpipe timings can be off by orders of
>> magnitude compared with the wall clock. The good thing about
>> writing your own code is that you know what it has done (of course
>> no one else knows, which can be a problem). But it seems many
>> people are unaware of the timing issue in Netpipe.
>
> Yes! I've noticed that. I am now using the Intel MPI Benchmark. The
> PingPong/PingPing and SendRecv test cases seem to be more realistic.
> Does anyone have any comments about this test suite?
>
>
>> 2) It's worth distinguishing between ethernet and TCP/IP. With
>> MPIGAMMA, the Intel Pro 1000 NIC has a latency of 12 microsecs
>> including the switch, and a duplex bandwidth of 220 MBytes/sec.
>> With the Extreme Networks X450a-48t switch we can sustain
>> 220 MBytes/sec over 48 ports at once. This is not IB performance,
>> but it seems sufficient to scale a number of applications to the
>> 100 cpu level, and perhaps beyond.
>>
>
> GAMMA seems to be great work, judging from some of the reports on
> its web site. However, I have not tried it yet, and I am not sure if
> I will, mainly because it only supports MPICH-1. Does anyone have a
> rough idea of how much work it would be to make it available for
> Open MPI? It seems to be a very interesting student project...
>
> --
> Lisandro Dalcín
> ---------------
> Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
> Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
> Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
> PTLC - Güemes 3450, (3000) Santa Fe, Argentina
> Tel/Fax: +54-(0)342-451.1594
>