On Sep 3, 2008, at 6:11 PM, Vincent Rotival wrote:
> No, what I'd like is that when doing something like
> call mpi_bcast(data, 1, MPI_INTEGER, 0, .....)
> the program continues AFTER the Bcast is completed (so no control is
> returned to the user), but while threads with rank > 0 are waiting in
> the Bcast they are not taking CPU resources.
Threads with rank > 0? Now, this scares me! If all your threads
are going into the bcast, then I guess the application is not correct
from the MPI standard perspective (i.e., on each communicator there is
only one outstanding collective at any moment). In MPI, each process (and
not each thread) has a rank, and each process exists in each communicator
only once. In other words, as each collective is bound to a specific
communicator, on each of your processes only one thread should go into
the MPI_Bcast if you want only ONE collective.
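For illustration, here is a minimal sketch (not from the original thread; the program and variable names are made up) of a hybrid MPI + OpenMP program in which only the master thread of each process enters the broadcast, assuming the MPI library provides MPI_THREAD_FUNNELED:

program funneled_bcast
  use mpi
  implicit none
  integer :: ierr, provided, rank, val

  ! Request FUNNELED support: only the master thread makes MPI calls
  call MPI_Init_thread(MPI_THREAD_FUNNELED, provided, ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

  if (rank == 0) val = 42   ! root fills the buffer

  !$omp parallel
  !$omp master
  ! Exactly one thread per process enters the collective
  call MPI_Bcast(val, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
  !$omp end master
  !$omp end parallel

  call MPI_Finalize(ierr)
end program funneled_bcast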
> I hope it is clearer now; I apologize for not being clear in the
> first place.
> Eugene Loh wrote:
>> Vincent Rotival wrote:
>>> The solution I retained was for the main thread to isend data
>>> separately to each of the other threads, which use Irecv plus a loop
>>> on mpi_test to check for completion of the Irecv. It might be dirty
>>> but it works much better than using Bcast.
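A minimal sketch of the kind of workaround described above (this is not Vincent's actual code; the variable names and the bare polling loop are assumptions): rank 0 posts an MPI_Isend to every other rank, and each receiver posts an MPI_Irecv and polls it with MPI_Test:

program isend_poll
  use mpi
  implicit none
  integer :: ierr, rank, nprocs, i, val, req
  logical :: done
  integer, allocatable :: reqs(:)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

  if (rank == 0) then
     val = 42
     allocate(reqs(nprocs - 1))
     ! Rank 0 sends the value to every other rank individually
     do i = 1, nprocs - 1
        call MPI_Isend(val, 1, MPI_INTEGER, i, 0, MPI_COMM_WORLD, reqs(i), ierr)
     end do
     call MPI_Waitall(nprocs - 1, reqs, MPI_STATUSES_IGNORE, ierr)
  else
     ! Every other rank posts a receive and polls it until it completes
     call MPI_Irecv(val, 1, MPI_INTEGER, 0, 0, MPI_COMM_WORLD, req, ierr)
     done = .false.
     do while (.not. done)
        call MPI_Test(req, done, MPI_STATUS_IGNORE, ierr)
        ! a short sleep here would keep the poll from pinning a CPU core
     end do
  end if

  call MPI_Finalize(ierr)
end program isend_poll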
>> Thanks for the clarification.
>> But this strikes me more as a question about the MPI standard than
>> about the Open MPI implementation. That is, what you really want
>> is for the MPI API to support a non-blocking form of collectives.
>> You want control to return to the user program before the barrier/
>> bcast/etc. operation has completed. That's an API change.
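For context: such non-blocking collectives were eventually standardized in MPI-3. A minimal sketch of that later interface, using MPI_Ibcast (which did not exist when this thread was written):

program ibcast_example
  use mpi
  implicit none
  integer :: ierr, rank, val, req

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

  if (rank == 0) val = 42

  ! Start the broadcast without blocking; a request handle is returned
  call MPI_Ibcast(val, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, req, ierr)

  ! ... other work could overlap with the broadcast here ...

  ! Complete the collective before using val on non-root ranks
  call MPI_Wait(req, MPI_STATUS_IGNORE, ierr)

  call MPI_Finalize(ierr)
end program ibcast_example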