
Subject: Re: [OMPI devel] OMPI vs Scali performance comparisons
From: George Bosilca (bosilca_at_[hidden])
Date: 2009-03-17 19:21:13


The default fragment sizes for large messages are not tuned for the
new generation of processors. This might be worth investigating, to
see whether we can match their bandwidth.
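For example, these sizes are exposed as MCA parameters, so one could
experiment from the command line along these lines (a sketch: the
value below is a placeholder to be tuned, not a recommendation, and
./pingpong stands in for whatever benchmark binary is being run):

    # list the shared-memory BTL parameters and their current defaults
    ompi_info --param btl sm

    # try a larger maximum fragment size for large messages on the sm BTL
    mpirun --mca btl_sm_max_send_size 65536 -np 8 ./pingpong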

   george.

On Mar 17, 2009, at 18:23, Eugene Loh wrote:

> A colleague of mine ran some microkernels on an 8-way Barcelona box
> (Sun x2200M2 at 2.3 GHz). Here are some performance comparisons with
> Scali. The performance tests are modified versions of the HPCC
> pingpong tests (a minimal sketch of such a pingpong appears after
> this message). The OMPI version is the trunk with my "single-queue"
> fixes; otherwise, OMPI latency at higher np would be noticeably
> worse.
>
>             latency (ns)        bandwidth (MB/s)
>             (8-byte msgs)       (2M-byte msgs)
>             =============       ================
>     np      Scali    OMPI       Scali    OMPI
>
>      2        327     661        1458    1295
>      4        369     670        1517    1287
>      8        414     758        1535    1294
>
> OMPI latency is nearly 2x Scali's. Presumably, "fastpath" PML latency
> optimizations would help us a lot here. Thankfully, our latency is
> flat with np thanks to the recent "single-queue" fixes; otherwise our
> high-np latency story would be much worse. We're behind on bandwidth
> as well, though not as badly.
> _______________________________________________
> devel mailing list
> devel_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/devel
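
For reference, here is a minimal pingpong sketch in the spirit of the
HPCC tests mentioned above. This is not Eugene's actual modified HPCC
code (which is not shown in this thread); the rank pairing, iteration
count, and 8-byte message size are illustrative assumptions.

    /* Minimal pingpong sketch in the spirit of the HPCC pingpong tests.
     * NOT the modified HPCC code discussed above; pairing, iteration
     * count, and message size are illustrative assumptions. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size, i;
        const int iters = 1000;
        const int nbytes = 8;   /* 8-byte messages, as in the latency test */
        char *buf;
        double t0, t1;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        buf = malloc(nbytes);

        /* Pair even ranks with the next odd rank: 0<->1, 2<->3, ... */
        int peer = (rank % 2 == 0) ? rank + 1 : rank - 1;

        MPI_Barrier(MPI_COMM_WORLD);   /* all ranks, to avoid deadlock at odd np */

        if (peer < size) {
            t0 = MPI_Wtime();
            for (i = 0; i < iters; i++) {
                if (rank % 2 == 0) {
                    MPI_Send(buf, nbytes, MPI_BYTE, peer, 0, MPI_COMM_WORLD);
                    MPI_Recv(buf, nbytes, MPI_BYTE, peer, 0, MPI_COMM_WORLD,
                             MPI_STATUS_IGNORE);
                } else {
                    MPI_Recv(buf, nbytes, MPI_BYTE, peer, 0, MPI_COMM_WORLD,
                             MPI_STATUS_IGNORE);
                    MPI_Send(buf, nbytes, MPI_BYTE, peer, 0, MPI_COMM_WORLD);
                }
            }
            t1 = MPI_Wtime();
            if (rank % 2 == 0) {
                /* one-way latency in ns = round-trip time / 2 */
                printf("rank %d: latency = %.1f ns\n",
                       rank, (t1 - t0) / iters / 2.0 * 1e9);
            }
        }

        free(buf);
        MPI_Finalize();
        return 0;
    }

Timing the round trip and halving it gives the one-way latency
reported in the table above; running the same loop with 2M-byte
messages and dividing bytes transferred by elapsed time would give the
corresponding bandwidth figure.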