Open MPI Development Mailing List Archives

Subject: Re: [OMPI devel] RFC: Add support to send/receive CUDA device memory directly
From: Brice Goglin (Brice.Goglin_at_[hidden])
Date: 2011-04-14 12:37:57


On 14/04/2011 17:58, George Bosilca wrote:
> On Apr 13, 2011, at 20:07, Ken Lloyd wrote:
>
>
>> George, Yes. GPUDirect eliminated an additional (host) memory buffering step between the HCA and the GPU that took CPU cycles.
>>
> If this is the case, then why do we need to use special memcpy functions to copy the data back into host memory prior to using the send/recv protocol? If GPUDirect removes the need for host buffering, then as soon as the memory is identified as being on the device (using Unified Virtual Addressing), the device can deliver it directly to the network card.
>

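For reference, a minimal sketch of what that "special memcpy" staging path
could look like, assuming the CUDA 4.0 runtime API (cudaPointerGetAttributes
for the UVA check, cudaMemcpy for the device-to-host copy); the function and
buffer names here are made up, not the actual Open MPI code:

  #include <cuda_runtime.h>
  #include <stddef.h>

  /* Decide whether a send buffer lives in device memory (via UVA) and, if
   * so, stage it into a host bounce buffer before the regular send/recv
   * protocol touches it. Returns 1 if staged, 0 if host memory, -1 on error. */
  static int stage_if_device(const void *buf, size_t len,
                             void *host_bounce, const void **send_ptr)
  {
      struct cudaPointerAttributes attr;
      cudaError_t err = cudaPointerGetAttributes(&attr, buf);

      if (err != cudaSuccess) {      /* pointer unknown to CUDA: plain host memory */
          cudaGetLastError();        /* clear the sticky error state */
          *send_ptr = buf;
          return 0;
      }

      /* CUDA 4.x reports the location in attr.memoryType
       * (newer toolkits renamed the field to attr.type). */
      if (attr.memoryType == cudaMemoryTypeDevice) {
          if (cudaMemcpy(host_bounce, buf, len,
                         cudaMemcpyDeviceToHost) != cudaSuccess)
              return -1;
          *send_ptr = host_bounce;   /* protocol now sees host memory */
          return 1;
      }

      *send_ptr = buf;
      return 0;
  }
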
GPUDirect is only about using the same host buffer for DMA from/to both
the NIC and the GPU. Without GPUDirect, you have one host buffer for the
GPU and another one for IB (it looks like some strange memory registration
problem to me...), and you have to memcpy between them in the middle.
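
A rough illustration of the two paths, assuming libibverbs on the NIC side
and the CUDA 4.0 runtime on the GPU side (registration details and error
handling omitted; buffer and function names are made up for the example):

  #include <cuda_runtime.h>
  #include <infiniband/verbs.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  /* Without GPUDirect (v1): the CUDA-pinned buffer and the IB-registered
   * buffer cannot be the same pages, so the data crosses two host buffers. */
  void send_without_gpudirect(struct ibv_pd *pd, const void *devbuf, size_t len)
  {
      void *cuda_host = NULL, *ib_host = malloc(len);
      cudaMallocHost(&cuda_host, len);                  /* pinned for CUDA */
      struct ibv_mr *mr = ibv_reg_mr(pd, ib_host, len,  /* pinned for the NIC */
                                     IBV_ACCESS_LOCAL_WRITE);

      cudaMemcpy(cuda_host, devbuf, len, cudaMemcpyDeviceToHost);
      memcpy(ib_host, cuda_host, len);   /* the extra copy "in the middle" */
      /* ... post the ibv send from ib_host / mr ... */
  }

  /* With GPUDirect (v1): one page-aligned host buffer can be registered with
   * both the CUDA driver and the IB stack, so the middle memcpy disappears. */
  void send_with_gpudirect(struct ibv_pd *pd, const void *devbuf, size_t len)
  {
      void *shared = NULL;
      posix_memalign(&shared, sysconf(_SC_PAGESIZE), len);
      cudaHostRegister(shared, len, cudaHostRegisterDefault);
      struct ibv_mr *mr = ibv_reg_mr(pd, shared, len, IBV_ACCESS_LOCAL_WRITE);

      cudaMemcpy(shared, devbuf, len, cudaMemcpyDeviceToHost);
      /* ... post the ibv send directly from shared / mr ... */
  }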

We're all confused by the name "GPUDirect" because we remember people
doing DMA directly between the NIC and a GPU or SCSI disk ten years ago.
GPUDirect doesn't go that far, unfortunately :/

Brice