Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] Support for CUDA and GPU-direct with OpenMPI 1.6.5 and 1.7.2
From: Michael Thomadakis (drmichaelt7777_at_[hidden])
Date: 2013-07-08 13:33:48


Thanks Tom, that sounds good. I will give it a try as soon as our Phi host
here gets installed.

I assume that all the prerequisite libs and bins on the Phi side are
available when we download the Phi s/w stack from Intel's site, right?

Cheers
Michael

On Mon, Jul 8, 2013 at 12:10 PM, Elken, Tom <tom.elken_at_[hidden]> wrote:

> Do you guys have any plan to support Intel Phi in the future? That is,
> running MPI code on the Phi cards or across the multicore host and the Phi,
> as Intel MPI does?
>
> [Tom]
>
> Hi Michael,
>
> Because a Xeon Phi card acts a lot like a Linux host with an x86
> architecture, you can build your own Open MPI libraries to serve this
> purpose.
>
> Our team has used an existing (older 1.4.3) version of the Open MPI source
> to build an Open MPI for running MPI code on Intel Xeon Phi cards over
> Intel’s (formerly QLogic’s) True Scale InfiniBand fabric, and it works
> quite well. We have not released a pre-built Open MPI as part of any Intel
> software release, but I think that if you have a compiler for Xeon Phi
> (Intel Compiler or GCC) and an interconnect for it, you should be able to
> build an Open MPI that works on Xeon Phi.
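
As an illustration only, a minimal sketch of what such a native Xeon Phi build
might look like, assuming the Intel compiler's -mmic cross-compilation flag and
the k1om host triplet commonly cited for Knights Corner cards; the compiler
setup path and install prefix below are placeholders, and True Scale users
would additionally point configure at the PSM libraries built for the card:

    $ source /opt/intel/bin/compilervars.sh intel64    # placeholder path to Intel compiler environment
    $ ./configure --prefix=/opt/openmpi-mic \
                  --host=x86_64-k1om-linux \
                  CC="icc -mmic" CXX="icpc -mmic" FC="ifort -mmic"
    $ make -j8 && make install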
>
> Cheers,
> Tom Elken
>
> thanks...
>
> Michael
>
> On Sat, Jul 6, 2013 at 2:36 PM, Ralph Castain <rhc_at_[hidden]> wrote:
>
> Rolf will have to answer the question on level of support. The CUDA code
> is not in the 1.6 series as it was developed after that series went
> "stable". It is in the 1.7 series, although the level of support will
> likely be incrementally increasing as that "feature" series continues to
> evolve.
>
>
>
> On Jul 6, 2013, at 12:06 PM, Michael Thomadakis <drmichaelt7777_at_[hidden]>
> wrote:
>
> > Hello OpenMPI,
> >
> > I am wondering what level of support there is for CUDA and GPUDirect in
> > Open MPI 1.6.5 and 1.7.2.
> >
> > I saw the ./configure --with-cuda=CUDA_DIR option in the FAQ. However, it
> > seems that configure in v1.6.5 ignored it.
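
For reference, a CUDA-enabled build uses just that configure flag; the sketch
below is illustrative, with a placeholder CUDA install path and prefix, and the
final ompi_info check only reports the parameter on builds that actually carry
the CUDA support (it may not be present in every release):

    $ ./configure --with-cuda=/usr/local/cuda --prefix=$HOME/openmpi-1.7.2-cuda
    $ make -j8 && make install
    $ ompi_info --parsable --all | grep mpi_built_with_cuda_support:value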
> >
> > Can you identify GPU memory and send messages from it directly without
> > copying to host memory first?
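
To make the question concrete, the sketch below shows what a CUDA-aware build
is expected to allow, assuming two ranks, one GPU per rank, and an Open MPI
configured with --with-cuda: the device pointer returned by cudaMalloc is
passed straight to MPI_Send/MPI_Recv with no explicit staging copy to host
memory. Compile with mpicc and link against the CUDA runtime (e.g. -lcudart).

    /* cuda_aware_send.c: illustrative CUDA-aware MPI point-to-point sketch */
    #include <mpi.h>
    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int n = 1024;
        double *d_buf;                                 /* buffer in GPU memory */
        cudaMalloc((void **)&d_buf, n * sizeof(double));

        if (rank == 0) {
            cudaMemset(d_buf, 0, n * sizeof(double));
            /* device pointer handed directly to MPI; no cudaMemcpy to a host buffer */
            MPI_Send(d_buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(d_buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d doubles into GPU memory\n", n);
        }

        cudaFree(d_buf);
        MPI_Finalize();
        return 0;
    }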
> >
> >
> > Or in general, what level of CUDA support is there in 1.6.5 and 1.7.2?
> > Do you support SDK 5.0 and above?
> >
> > Cheers ...
> > Michael
>
> _______________________________________________
> users mailing list
> users_at_[hidden]
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>