
Open MPI User's Mailing List Archives


This web mail archive is frozen; no new mails have been added to it since July of 2016.

Subject: Re: [OMPI users] gpudirect p2p (again)?
From: Rolf vandeVaart (rvandevaart_at_[hidden])
Date: 2012-07-09 14:07:26

Yes, this feature is in Open MPI 1.7. It is implemented in the "smcuda" btl. If you configure as outlined in the FAQ, things should just work: the smcuda btl will be selected and P2P will be used between GPUs on the same node. It is only used for transfers of buffers larger than 4K.
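For reference, a minimal sketch of how one might build and run with this support. The configure flag and MCA parameter names are the standard CUDA-related ones in Open MPI; the CUDA toolkit path and the application name are placeholders:

```shell
# Build Open MPI 1.7 with CUDA support (toolkit path is a placeholder).
./configure --with-cuda=/usr/local/cuda
make && make install

# Run two ranks on one node. With CUDA support built in, the smcuda btl
# is selected for intra-node transfers; forcing it explicitly:
mpirun -np 2 --mca btl smcuda,self ./my_cuda_mpi_app

# CUDA IPC (P2P between GPUs on the same node) can be toggled via an MCA
# parameter; it only applies to messages larger than 4K:
mpirun -np 2 --mca btl_smcuda_use_cuda_ipc 1 ./my_cuda_mpi_app
```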


>-----Original Message-----
>From: users-bounces_at_[hidden] [mailto:users-bounces_at_[hidden]]
>On Behalf Of Crni Gorac
>Sent: Monday, July 09, 2012 1:25 PM
>To: users_at_[hidden]
>Subject: [OMPI users] gpudirect p2p (again)?
>I am examining CUDA support in OpenMPI, using the OpenMPI current feature
>series (v1.7). There was a question on this mailing list back in October 2011
>about OpenMPI being able to use P2P transfers when the two MPI
>processes involved in the transfer happen to execute on the same
>machine, and the answer was that this feature was being implemented. So my
>question is: what is the current status here - is this feature supported now?