
Open MPI Development Mailing List Archives


Subject: Re: [OMPI devel] GPUDirect v1 issues
From: Kenneth A. Lloyd (kenneth.lloyd_at_[hidden])
Date: 2012-01-17 10:16:32

Also, which version of MVAPICH2 did you use?

I've been poring over Rolf's Open MPI CUDA RDMA 3 (using CUDA 4.1 r2) vis-à-vis
MVAPICH-GPU on a small 3-node cluster. These are wickedly interesting.

-----Original Message-----
From: devel-bounces_at_[hidden] [mailto:devel-bounces_at_[hidden]] On
Behalf Of Rolf vandeVaart
Sent: Tuesday, January 17, 2012 7:54 AM
To: Open MPI Developers
Subject: Re: [OMPI devel] GPUDirect v1 issues

I am not aware of any issues. Can you send me a test program and I can try
it out?
Which version of CUDA are you using?


>-----Original Message-----
>From: devel-bounces_at_[hidden] [mailto:devel-bounces_at_[hidden]]
>On Behalf Of Sebastian Rinke
>Sent: Tuesday, January 17, 2012 8:50 AM
>To: Open MPI Developers
>Subject: [OMPI devel] GPUDirect v1 issues
>Dear all,
>I'm using GPUDirect v1 with Open MPI 1.4.3 and am seeing blocking
>MPI_SEND/MPI_RECV calls hang forever.
>With two subsequent MPI_RECV calls, the second one hangs if its receive
>buffer pointer points somewhere inside the receive buffer, i.e. not at its
>beginning (the buffer was previously allocated with cudaMallocHost()).
>I tried the same with MVAPICH2 and did not see the problem.
>Does anybody know about issues with GPUDirect v1 using Open MPI?
>Thanks for your help,
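The scenario Sebastian describes could be reproduced with a minimal sketch along these lines. This is a hypothetical reproducer, not the test program from the thread: the buffer size, tags, and layout are illustrative assumptions, and it needs an MPI build with GPUDirect v1 support plus the CUDA toolkit to compile and run.

```c
/* Hypothetical minimal reproducer for the reported hang (illustrative
 * names and sizes; not the original test program from this thread). */
#include <mpi.h>
#include <cuda_runtime.h>

#define N (1 << 20)

int main(int argc, char **argv)
{
    int rank;
    char *buf;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Pinned host memory registered with CUDA -- the GPUDirect v1 path. */
    cudaMallocHost((void **)&buf, 2 * N);

    if (rank == 0) {
        MPI_Send(buf, N, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
        MPI_Send(buf, N, MPI_CHAR, 1, 1, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* First receive lands at the start of the pinned region: works. */
        MPI_Recv(buf, N, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        /* Second receive lands at an offset inside the same pinned region:
         * this is the case reported to hang with Open MPI 1.4.3. */
        MPI_Recv(buf + N, N, MPI_CHAR, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    cudaFreeHost(buf);
    MPI_Finalize();
    return 0;
}
```

Run with two ranks, e.g. `mpirun -np 2 ./repro`; per the report, the same pattern completes under MVAPICH2.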

