
Open MPI User's Mailing List Archives

This web mail archive is frozen; no new mails have been added to it since July 2016.

Subject: Re: [OMPI users] How "CUDA Init prior to MPI_Init" co-exists with unique GPU for each MPI process?
From: Rolf vandeVaart (rvandevaart_at_[hidden])
Date: 2011-12-14 10:54:13

To add to this: yes, we recommend that the CUDA context exist prior to the call to MPI_Init. During MPI_Init, the library attempts to register some internal buffers with the CUDA library, and that registration requires that a CUDA context already exist. Note that this is only relevant if you plan to send and receive CUDA device memory directly from MPI calls. There is a little more about this in the FAQ here.


From: Matthieu Brucher [mailto:matthieu.brucher_at_[hidden]]
Sent: Wednesday, December 14, 2011 10:47 AM
To: Open MPI Users
Cc: Rolf vandeVaart
Subject: Re: [OMPI users] How "CUDA Init prior to MPI_Init" co-exists with unique GPU for each MPI process?


Processes are not spawned by MPI_Init. They are spawned earlier, by the runtime, between your mpirun call and the start of your program. By the time your program starts, all MPI processes already exist (you can check by adding a sleep or something like that), but they are not yet synchronized and do not know about each other. That is what MPI_Init is for.

Matthieu Brucher
2011/12/14 Dmitry N. Mikushin (maemarcus_at_[hidden]):
Dear colleagues,

For the GPU Winter School, powered by the Moscow State University cluster
"Lomonosov", Open MPI 1.7 was built to test and popularize the CUDA
capabilities of MPI. There is one strange warning I cannot understand:
the Open MPI runtime suggests initializing CUDA prior to MPI_Init. Sorry,
but how can that be? I thought processes are spawned during MPI_Init,
and such a context would be created only on the very first root process.
Why do we need an existing CUDA context before MPI_Init? I think there
was no such warning in previous versions.

- D.

Information System Engineer, Ph.D.