Open MPI Development Mailing List Archives


Subject: Re: [OMPI devel] RFC: Add support to send/receive CUDA device memory directly
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2011-04-14 14:33:36


On Apr 14, 2011, at 12:37 PM, Brice Goglin wrote:

> GPUDirect is only about using the same host buffer for DMA from/to both
> the NIC and the GPU. Without GPUDirect, you have a host buffer for the
> GPU and another one for IB (looks like some strange memory registration
> problem to me...), and you have to memcpy between them in the middle.
>
> We're all confused with the name "GPUDirect" because we remember people
> doing DMA directly between the NIC and a GPU or SCSI disk ten years ago.
> GPUDirect doesn't go that far unfortunately :/

Correct. GPUDirect is a brilliant marketing name; the name has nothing to do with what the feature actually is: the ability to register the same host buffer with both CUDA and OpenFabrics.

As Brice says: GPUDirect does NOT send/receive data directly from the accelerator's memory.
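
To make the distinction concrete, here is a hedged sketch (not from the original mail) of what "registering the same host buffer to both CUDA and OpenFabrics" looks like at the API level. The function name and error handling are invented for illustration; a real MPI implementation would do this inside its transport layer, and the code requires CUDA plus an ibverbs device to actually run.

```c
#include <stdlib.h>
#include <cuda_runtime.h>
#include <infiniband/verbs.h>

/* Illustrative only: allocate one staging buffer and register it with
 * BOTH CUDA and the IB HCA.  This dual registration is what GPUDirect
 * (v1) makes possible; without it you need two separate pinned buffers
 * and a memcpy between them on every transfer. */
void *make_shared_staging_buffer(struct ibv_pd *pd, size_t len,
                                 struct ibv_mr **mr_out)
{
    void *buf = malloc(len);
    if (buf == NULL)
        return NULL;

    /* Pin and register the buffer with CUDA so the GPU's DMA engine
     * can copy device memory straight into it. */
    if (cudaHostRegister(buf, len, cudaHostRegisterDefault) != cudaSuccess) {
        free(buf);
        return NULL;
    }

    /* Register the SAME buffer with the HCA so the NIC can DMA
     * from/to it as well. */
    *mr_out = ibv_reg_mr(pd, buf, len,
                         IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_READ);
    if (*mr_out == NULL) {
        cudaHostUnregister(buf);
        free(buf);
        return NULL;
    }
    return buf;
}
```

Note that even with this, the data still transits host memory: GPU -> host buffer -> NIC. It is one fewer copy, not a direct GPU-to-NIC path.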

-- 
Jeff Squyres
jsquyres_at_[hidden]
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/