Open MPI User's Mailing List Archives

From: Lisandro Dalcin (dalcinl_at_[hidden])
Date: 2006-10-23 18:05:17


On 10/23/06, Tony Ladd <ladd_at_[hidden]> wrote:
> A couple of comments regarding issues raised by this thread.
>
> 1) In my opinion Netpipe is not such a great network benchmarking tool for
> HPC applications. It measures timings based on the completion of the send
> call on the transmitter not the completion of the receive. Thus, if there is
> a delay in copying the send buffer across the net, it will report a
> misleading timing compared with the wall-clock time. This is particularly
> problematic with multiple pairs of edge exchanges, which can oversubscribe
> most GigE switches. Here the netpipe timings can be off by orders of
> magnitude compared with the wall clock. The good thing about writing your
> own code is that you know what it has done (of course no one else knows,
> which can be a problem). But it seems many people are unaware of the timing
> issue in Netpipe.

Yes! I've noticed that. I am now using the Intel MPI Benchmark. The
PingPong, PingPing, and SendRecv test cases seem more realistic. Does
anyone have any comments about this test suite?
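The sender-side timing pitfall Tony describes is easy to reproduce outside MPI. Below is a minimal sketch of the effect (plain Python sockets standing in for MPI ranks; this is an analogy, not Netpipe or MPI itself): a small send() returns almost immediately because the kernel buffers the data, while the message is only actually delivered much later. Timing only the send call, as Netpipe does, misses that gap.

```python
import socket
import threading
import time

# A connected socket pair stands in for a sender/receiver rank pair.
s_send, s_recv = socket.socketpair()

MSG = b"x" * 1024  # small message: fits entirely in the kernel buffer

def receiver():
    time.sleep(0.2)            # receiver is busy; data sits in the buffer
    s_recv.recv(4096)          # finally consume the message
    s_recv.send(b"ack")        # acknowledge actual delivery

t = threading.Thread(target=receiver)
t.start()

t0 = time.perf_counter()
s_send.send(MSG)               # returns as soon as the kernel buffers it
send_time = time.perf_counter() - t0

s_send.recv(3)                 # wait for the ack (round-trip completion)
roundtrip_time = time.perf_counter() - t0
t.join()

# send_time is typically microseconds; roundtrip_time reflects the
# receiver's 0.2 s delay, so the two differ by orders of magnitude.
print(f"send() returned after {send_time * 1e6:.0f} us")
print(f"delivery acknowledged after {roundtrip_time * 1e3:.0f} ms")
```

A round-trip (ping-pong) measurement, as in the Intel MPI Benchmark, charges the full delivery time to the transfer, which is why it tracks wall-clock behavior more closely under congestion.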

> 2) It's worth distinguishing between ethernet and TCP/IP. With MPIGAMMA, the
> Intel Pro 1000 NIC has a latency of 12 microsecs including the switch and a
> duplex bandwidth of 220 MBytes/sec. With the Extreme Networks X450a-48t
> switch we can sustain 220MBytes/sec over 48 ports at once. This is not IB
> performance but it seems sufficient to scale a number of applications to the
> 100 cpu level, and perhaps beyond.
>

GAMMA seems to be great work, judging from some of the reports on its
web site. However, I have not tried it yet, and I am not sure I will,
mainly because it only supports MPICH-1. Does anyone have a rough idea
how much work it would be to make it available for Open MPI? It seems
to be a very interesting student project...

-- 
Lisandro Dalcín
---------------
Centro Internacional de Métodos Computacionales en Ingeniería (CIMEC)
Instituto de Desarrollo Tecnológico para la Industria Química (INTEC)
Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
PTLC - Güemes 3450, (3000) Santa Fe, Argentina
Tel/Fax: +54-(0)342-451.1594