I just want to make sure that I correctly understand your statement.
You're saying that running NetPIPE (NPtcp) directly over TCP gives you
a latency of 12us, but running NetPIPE (NPmpi) over Open MPI brings
this latency up to 45us?
On Jul 29, 2008, at 10:52 AM, Andy Georgi wrote:
> Zitat von Jeff Squyres <jsquyres_at_[hidden]>:
>> On Jul 28, 2008, at 2:53 PM, Andy Georgi wrote:
>>> we use Chelsio S320E-CXA adapters (http://www.chelsio.com/assetlibrary/products/S320E%20Product%20Brief%20080424.pdf)
>>> in one of our clusters. After tuning the kernel I measured the ping-
>>> pong latency via NetPIPE and got ~12us, which is pretty good for
>>> TCP, I think. So I wrote a simple ping-pong kernel and was really
>>> terrified by the ~45us I got with Open MPI 1.2.6. Are there any
>>> hints on how we can reduce the MPI latency? To increase the
>>> bandwidth we already set the buffer sizes, but we couldn't find a
>>> parameter that could be relevant for the latency. Every hint is
>>> welcome.
>> The upcoming Open MPI v1.3 series will support iWARP, which gives
>> better latency than that. I don't know all the Chelsio models,
>> though. Are those iWARP-capable cards?
> Thanks for the fast answer. So is this latency normal for TCP
> communication over MPI? Could RDMA maybe reduce the latency? It
> should work with those cards, but there are still problems with OFED.
> iWARP is also one of the features they offer, but if it works...
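[For reference, the buffer-size settings mentioned above live in Open MPI's MCA parameter system, and the TCP BTL's tunables can be listed with ompi_info. A hedged sketch follows; exact parameter names and defaults depend on the Open MPI version, the values shown are purely illustrative, and ./pingpong is a placeholder binary:]

```shell
# List the TCP BTL's tunable parameters for this Open MPI install:
ompi_info --param btl tcp

# Illustrative run forcing the TCP BTL; the buffer-size parameters
# below are the ones typically set for bandwidth, not latency:
mpirun --mca btl tcp,self \
       --mca btl_tcp_sndbuf 131072 \
       --mca btl_tcp_rcvbuf 131072 \
       -np 2 ./pingpong
```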
> Dresden University of Technology
> Center for Information Services
> and High Performance Computing (ZIH)
> D-01062 Dresden
> e-mail: Andy.Georgi_at_[hidden]
> WWW: http://www.tu-dresden.de/zih