Subject: [OMPI docs] FAQ: GPUDirect support for MPI_PACK/MPI_UNPACK
From: Carl Ponder (cponder_at_[hidden])
Date: 2014-10-03 10:42:28


I have a GPUDirect test case that appears to work correctly with OpenMPI
1.8.2 and CUDA 6.5.
It uses MPI_BCAST together with MPI_PACK & MPI_UNPACK on GPU-resident data,
programmed in OpenACC with PGI 14.9.
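
For reference, here is a stripped-down sketch of the pattern the test
exercises (buffer names, sizes, and values are illustrative rather than
my actual test case, and it assumes an OpenMPI build configured with
CUDA support):

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N 100

    int main(int argc, char **argv)
    {
        double a[N], b[N];        /* mirrored on the device by OpenACC */
        int    packsize, pos = 0, rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Safe upper bound on the packed size of 2*N doubles. */
        MPI_Pack_size(2 * N, MPI_DOUBLE, MPI_COMM_WORLD, &packsize);
        char *packbuf = malloc(packsize);

        if (rank == 0)
            for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0 * i; }

        #pragma acc data copy(a, b) create(packbuf[0:packsize])
        {
            /* host_data hands the *device* addresses to MPI, so
               MPI_Pack/MPI_Unpack read and write GPU memory directly. */
            #pragma acc host_data use_device(a, b, packbuf)
            {
                if (rank == 0) {
                    MPI_Pack(a, N, MPI_DOUBLE, packbuf, packsize, &pos,
                             MPI_COMM_WORLD);
                    MPI_Pack(b, N, MPI_DOUBLE, packbuf, packsize, &pos,
                             MPI_COMM_WORLD);
                }
                MPI_Bcast(packbuf, packsize, MPI_PACKED, 0, MPI_COMM_WORLD);
                if (rank != 0) {
                    MPI_Unpack(packbuf, packsize, &pos, a, N, MPI_DOUBLE,
                               MPI_COMM_WORLD);
                    MPI_Unpack(packbuf, packsize, &pos, b, N, MPI_DOUBLE,
                               MPI_COMM_WORLD);
                }
            }
        }   /* a and b copied back to the host on region exit */

        if (rank != 0)
            printf("rank %d: a[1]=%g b[1]=%g\n", rank, a[1], b[1]);

        free(packbuf);
        MPI_Finalize();
        return 0;
    }

The point is that the host_data region passes device pointers into
MPI_Pack/MPI_Unpack themselves, not just into MPI_BCAST, which is the
case the FAQ list below doesn't mention.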
The OpenMPI FAQ entry

    http://www.open-mpi.org/faq/?category=all#mpi-cuda-support

states that

    Here is the list of APIs that currently support sending and
    receiving CUDA device memory.

    MPI_Send, MPI_Bsend, MPI_Ssend, MPI_Rsend, MPI_Isend, MPI_Ibsend,
    MPI_Issend, MPI_Irsend, MPI_Send_init, MPI_Bsend_init,
    MPI_Ssend_init, MPI_Rsend_init, MPI_Recv, MPI_Irecv, MPI_Recv_init,
    MPI_Sendrecv, MPI_Bcast, MPI_Gather, MPI_Gatherv, MPI_Allgather,
    MPI_Allgatherv, MPI_Alltoall, MPI_Alltoallv, MPI_Scatter, MPI_Scatterv

implying that MPI_PACK & MPI_UNPACK aren't supported. The FAQ, however,
only covers OpenMPI releases up to 1.7.4 and CUDA up to 6.0.
Are MPI_PACK & MPI_UNPACK supported with GPU device buffers in the
OpenMPI 1.8.* releases?
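
(For what it's worth, the same FAQ documents a way to check whether a
given build was configured with CUDA support; if I'm reading it
correctly, something along the lines of

    ompi_info --parsable --all | grep mpi_built_with_cuda_support:value

should report value:true on a CUDA-enabled build.)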
Thanks,

                 Carl Ponder
                 HPC Applications Performance
                 NVIDIA Developer Technology
