Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Open MPI and DAPL 2.0.34 are incompatible?
From: Beat Rubischon (beat_at_[hidden])
Date: 2011-12-23 05:21:11

Hi Paul!

On 22.12.11 19:43, Paul Kapinos wrote:
> Well. Any suggestions? Does OpenMPI ever able to use DAPL 2.0 on Linux?

I don't think so. Even Intel dropped the need for DAPL in their 4.x
release. It's an extra layer between the IB stack and the MPI which
basically adds complexity and latency. According to the marketing
numbers, the 4.x releases using verbs perform significantly better
than the 3.x releases using DAPL.

In my experience Intel's MPI (and IBM aka Platform aka HP-MPI) often
performs better than OpenMPI on top of InfiniBand. It is similar in
flexibility and offers a wide range of optimizations for your specific
code. Of course a commercial MPI is expensive and it's a black box -
that's the main reason why I usually use OpenMPI when preparing
installations for my customers.
My recommendation for you: Use OpenMPI with verbs to get a clean and
free MPI on your cluster with easy interfaces to your job scheduler. Or
buy a commercial MPI, invest a lot of manpower in a tight integration,
and gain improved latency and/or throughput.


     \|/                           Beat Rubischon <beat_at_[hidden]>
   ( 0-0 )                   
My experiences, thoughts and dreams: