
Open MPI Development Mailing List Archives


From: Tim S. Woodall (twoodall_at_[hidden])
Date: 2005-11-29 12:36:26


Can you try out the changes I just committed on the trunk? We were doing
more select/recvs than necessary.


George Bosilca wrote:
> I ran NetPipe on 4 different clusters with different OSes and Ethernet
> devices. The result is that nearly the same behaviour shows up all the
> time for small messages. Basically, our latency is really bad. Attached
> are 2 of the graphs, one from a MAC OS X cluster (wotan) and one from a
> Linux 2.6.10 32-bit cluster. The graphs are for NetPipe compiled over
> raw TCP, and for Open MPI with all the PMLs (uniq, teg and ob1). Here is
> the global trend:
> - we are always slower than native TCP (what a guess!)
> - uniq is faster than teg by a fraction of a second (it's more visible
> on fast networks).
> - teg and uniq always have better latency than ob1.
> - the behaviour of ob1 differs between wotan and boba. On boba the
> performance is a lot closer to the other PMLs, while on wotan it is
> about 40 microseconds slower (nearly double the raw TCP latency).
> On the same nodes I ran other NetPipe tests with SM and MX, and the
> results are pretty good. So I think we have this latency problem only on
> TCP. I will take a look to see exactly how this happens, but any help is
> welcome.
> george.
> "We must accept finite disappointment, but we must never lose infinite
> hope."
> Martin Luther King