Open MPI Development Mailing List Archives



Subject: Re: [OMPI devel] MVAPICH2 vs Open-MPI
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2012-02-15 13:17:16

I think the short answer is: Rolf is currently working on better GP-GPU integration with Open MPI. :-)

On Feb 14, 2012, at 5:36 PM, Rolf vandeVaart wrote:

> There are several things going on here that make their library perform better.
> With respect to inter-node performance, both MVAPICH2 and Open MPI first copy the GPU memory into host memory. However, MVAPICH2 uses special host buffers and a code path that allow it to copy the data asynchronously, so it does a better job of pipelining than Open MPI. I believe their host buffers are also bigger, which works better for larger messages. Open MPI just piggybacks on the existing host buffers in the Open MPI openib BTL, and it currently uses only synchronous copies. (There is hope to improve that.)
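The staged, pipelined copy Rolf describes can be sketched roughly as below: a double-buffered pipeline that overlaps the asynchronous device-to-host copy of one chunk with the network send of the previous one. This is an illustrative sketch, not Open MPI or MVAPICH2 code; `net_send()`, the chunk size, and the buffer count are hypothetical stand-ins, error checking is omitted, and it requires a CUDA-capable machine.

```c
#include <cuda_runtime.h>
#include <stddef.h>

enum { CHUNK = 512 * 1024,  /* per-chunk staging size (assumed) */
       NBUF  = 2 };         /* double buffering */

/* stand-in for the BTL-level send; hypothetical */
void net_send(const void *buf, size_t len);

void pipelined_gpu_send(const char *d_src, size_t len)
{
    char *h_buf[NBUF];
    cudaStream_t st[NBUF];
    for (int j = 0; j < NBUF; j++) {
        /* pinned (page-locked) host buffers make the copies truly async */
        cudaHostAlloc((void **)&h_buf[j], CHUNK, cudaHostAllocDefault);
        cudaStreamCreate(&st[j]);
    }

    int prev = -1;
    size_t prev_len = 0, i = 0;
    for (size_t off = 0; off < len; off += CHUNK, i++) {
        size_t n = (len - off < CHUNK) ? len - off : CHUNK;
        int b = (int)(i % NBUF);
        cudaStreamSynchronize(st[b]);   /* wait until buffer b is free */
        cudaMemcpyAsync(h_buf[b], d_src + off, n,
                        cudaMemcpyDeviceToHost, st[b]);
        if (prev >= 0) {
            /* send the previous chunk while the copy above proceeds */
            cudaStreamSynchronize(st[prev]);
            net_send(h_buf[prev], prev_len);
        }
        prev = b;
        prev_len = n;
    }
    if (prev >= 0) {                    /* drain the last chunk */
        cudaStreamSynchronize(st[prev]);
        net_send(h_buf[prev], prev_len);
    }

    for (int j = 0; j < NBUF; j++) {
        cudaStreamDestroy(st[j]);
        cudaFreeHost(h_buf[j]);
    }
}
```

With synchronous copies, by contrast, each chunk's copy and send happen strictly back-to-back, so neither resource is kept busy while the other works.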
> Secondly, with respect to intra-node performance, they are using the Inter-Process Communication (IPC) feature of CUDA, which means that within a node one can move GPU memory directly from one GPU to another. We have an RFC from December to add this to Open MPI as well, but it has not been approved yet. Hopefully sometime soon.
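The CUDA IPC mechanism referenced here boils down to one process exporting a handle for its device buffer and a peer process mapping that handle and copying GPU-to-GPU. A rough sketch, with the host-side handle exchange (e.g. over a pipe or shared memory) left out as an assumption, and error checking omitted:

```c
#include <cuda_runtime.h>
#include <stddef.h>

/* Exporting process: publish a handle to its device buffer. */
void export_buf(void *d_buf, cudaIpcMemHandle_t *handle)
{
    cudaIpcGetMemHandle(handle, d_buf);
    /* hand `handle` to the peer process via any host-side channel */
}

/* Importing process: map the peer's buffer and copy GPU-to-GPU,
   never staging through host memory. */
void import_and_copy(const cudaIpcMemHandle_t *handle,
                     void *d_dst, size_t len)
{
    void *d_peer = NULL;
    cudaIpcOpenMemHandle(&d_peer, *handle,
                         cudaIpcMemLazyEnablePeerAccess);
    cudaMemcpy(d_dst, d_peer, len, cudaMemcpyDeviceToDevice);
    cudaIpcCloseMemHandle(d_peer);
}
```

Because the copy stays on the device, this avoids the host-staging path entirely for intra-node transfers.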
> Rolf
>> -----Original Message-----
>> From: devel-bounces_at_[hidden] [mailto:devel-bounces_at_[hidden]]
>> On Behalf Of Rayson Ho
>> Sent: Tuesday, February 14, 2012 4:16 PM
>> To: Open MPI Developers
>> Subject: [OMPI devel] MVAPICH2 vs Open-MPI
>> See P. 38 - 40: MVAPICH2 outperforms Open-MPI in every test. Is there
>> something they are doing to optimize for CUDA & GPUs that is not in
>> OMPI, or did they specifically tune MVAPICH2 to make it shine?
>> Workshop/Presentations/7_OSU.pdf
>> The benchmark package:
>> Rayson
>> =================================
>> Open Grid Scheduler / Grid Engine
>> Scalable Grid Engine Support Program
>> _______________________________________________
>> devel mailing list
>> devel_at_[hidden]
> -----------------------------------------------------------------------------------
> This email message is for the sole use of the intended recipient(s) and may contain
> confidential information. Any unauthorized review, use, disclosure or distribution
> is prohibited. If you are not the intended recipient, please contact the sender by
> reply email and destroy all copies of the original message.
> -----------------------------------------------------------------------------------
> _______________________________________________
> devel mailing list
> devel_at_[hidden]

Jeff Squyres
For corporate legal information go to: