Subject: Re: [OMPI docs] FAQ: GPUDirect support for MPI_PACK/MPI_UNPACK
From: Jeff Squyres (jsquyres) (jsquyres_at_[hidden])
Date: 2014-10-03 10:48:21


Adding Rolf vandeVaart, OMPI's NVIDIA rep. He can help you out.

On Oct 3, 2014, at 10:42 AM, Carl Ponder <cponder_at_[hidden]> wrote:

> I have a GPUDirect test-case that appears to work correctly with OpenMPI 1.8.2 and CUDA 6.5.
> It uses MPI_BCAST plus MPI_PACK & MPI_UNPACK with GPU-located data, programmed in OpenACC with PGI 14.9.
> The OpenMPI FAQ entry
> http://www.open-mpi.org/faq/?category=all#mpi-cuda-support
> states that
> Here is the list of APIs that currently support sending and receiving CUDA device memory.
> MPI_Send, MPI_Bsend, MPI_Ssend, MPI_Rsend, MPI_Isend, MPI_Ibsend, MPI_Issend, MPI_Irsend, MPI_Send_init, MPI_Bsend_init, MPI_Ssend_init, MPI_Rsend_init, MPI_Recv, MPI_Irecv, MPI_Recv_init, MPI_Sendrecv, MPI_Bcast, MPI_Gather, MPI_Gatherv, MPI_Allgather, MPI_Allgatherv, MPI_Alltoall, MPI_Alltoallv, MPI_Scatter, MPI_Scatterv
>
> implying that MPI_PACK & MPI_UNPACK aren't supported. However, the FAQ only references OpenMPI up to 1.7.4 and CUDA up to 6.0.
> Are MPI_PACK & MPI_UNPACK supported in the OpenMPI 1.8.* releases?
> Thanks,
>
> Carl Ponder
> HPC Applications Performance
> NVIDIA Developer Technology
>
> _______________________________________________
> docs mailing list
> docs_at_[hidden]
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/docs
> Link to this post: http://www.open-mpi.org/community/lists/docs/2014/10/0211.php

-- 
Jeff Squyres
jsquyres_at_[hidden]
For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/