There was discussion of this in a prior thread on the OMPI devel mailing list:

http://www.open-mpi.org/community/lists/devel/2013/05/12354.php


On Jul 6, 2013, at 2:01 PM, Michael Thomadakis <drmichaelt7777@gmail.com> wrote:

thanks,

Do you guys have any plans to support Intel Phi in the future? That is, running MPI code on the Phi cards, or across the multicore host and the Phi, as Intel MPI does?

thanks...
Michael


On Sat, Jul 6, 2013 at 2:36 PM, Ralph Castain <rhc@open-mpi.org> wrote:
Rolf will have to answer the question on the level of support. The CUDA code is not in the 1.6 series as it was developed after that series went "stable". It is in the 1.7 series, although the level of support will likely increase incrementally as that "feature" series continues to evolve.

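For context on what "CUDA support" means in practice: with a CUDA-aware build (one configured with --with-cuda), device pointers can be passed directly to MPI calls and the library handles the staging or pipelining itself. Below is a minimal sketch, assuming two ranks and a CUDA-aware Open MPI 1.7 build; the buffer size and message tag are arbitrary illustration values, not anything prescribed by the thread.

/* Hedged sketch: sending directly from GPU memory with a CUDA-aware
 * Open MPI build (configured with --with-cuda=<CUDA install path>). */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int n = 1024;
    double *d_buf = NULL;                        /* device pointer */
    cudaMalloc((void **)&d_buf, n * sizeof(double));

    if (rank == 0) {
        cudaMemset(d_buf, 0, n * sizeof(double));
        /* With a CUDA-aware build, the device pointer can be passed
         * straight to MPI_Send; no explicit cudaMemcpy to a host
         * buffer is needed first. */
        MPI_Send(d_buf, n, MPI_DOUBLE, 1, 99, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(d_buf, n, MPI_DOUBLE, 0, 99, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d doubles into device memory\n", n);
    }

    cudaFree(d_buf);
    MPI_Finalize();
    return 0;
}

Built against a plain (non-CUDA-aware) MPI, the same program would have to cudaMemcpy the data into a host buffer before calling MPI_Send.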

On Jul 6, 2013, at 12:06 PM, Michael Thomadakis <drmichaelt7777@gmail.com> wrote:

> Hello OpenMPI,
>
> I am wondering what level of support there is for CUDA and GPUDirect in Open MPI 1.6.5 and 1.7.2.
>
> I saw the ./configure --with-cuda=CUDA_DIR option in the FAQ. However, it seems that configure in v1.6.5 ignored it.
>
> Can you identify GPU memory and send messages from it directly, without copying to host memory first? (See the pointer-check sketch after this quoted message.)
>
>
> Or, in general, what level of CUDA support is there in 1.6.5 and 1.7.2? Do you support CUDA SDK 5.0 and above?
>
> Cheers ...
> Michael
> _______________________________________________
> users mailing list
> users@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
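On the question of identifying GPU memory: a CUDA-aware MPI distinguishes device pointers from host pointers at the API boundary. The sketch below illustrates that kind of check with the CUDA driver API's cuPointerGetAttribute; it is only an illustration of the technique, not Open MPI's actual code, and the device index and allocation size are arbitrary.

/* Hedged illustration of telling device memory apart from host memory
 * via the CUDA driver API.  This mirrors the kind of pointer check a
 * CUDA-aware MPI performs internally; it is not Open MPI's code. */
#include <cuda.h>
#include <stdio.h>
#include <stdint.h>

static const char *memory_kind(const void *ptr)
{
    CUmemorytype type;
    CUresult rc = cuPointerGetAttribute(&type,
                                        CU_POINTER_ATTRIBUTE_MEMORY_TYPE,
                                        (CUdeviceptr)(uintptr_t)ptr);
    if (rc != CUDA_SUCCESS) {
        /* Ordinary host memory not registered with CUDA typically
         * fails the query. */
        return "host (not registered with CUDA)";
    }
    return (type == CU_MEMORYTYPE_DEVICE) ? "device" : "host";
}

int main(void)
{
    /* Minimal driver-API setup: device 0, one context. */
    cuInit(0);
    CUdevice dev;
    CUcontext ctx;
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&ctx, 0, dev);

    CUdeviceptr d_ptr;
    cuMemAlloc(&d_ptr, 1024);

    int h_buf[256];
    printf("device buffer: %s\n", memory_kind((void *)(uintptr_t)d_ptr));
    printf("host buffer:   %s\n", memory_kind(h_buf));

    cuMemFree(d_ptr);
    cuCtxDestroy(ctx);
    return 0;
}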


_______________________________________________
users mailing list
users@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users
