Open MPI User's Mailing List Archives

Subject: Re: [OMPI users] CPU burning in Wait state
From: George Bosilca (bosilca_at_[hidden])
Date: 2008-09-03 13:22:46


On Sep 3, 2008, at 6:11 PM, Vincent Rotival wrote:

> Eugene,
>
> No, what I'd like is that when doing something like
>
> call mpi_bcast(data, 1, MPI_INTEGER, 0, .....)
>
> the program continues only AFTER the Bcast has completed (so no
> control is returned to the user before then), but while threads with
> rank > 0 are waiting in the Bcast they are not consuming CPU
> resources.
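
(An aside, not from the original thread: the CPU-burning half of this
request can be partly addressed with Open MPI's mpi_yield_when_idle
MCA parameter, which makes a waiting process yield the processor
inside its progress loop instead of spinning aggressively:

    mpirun --mca mpi_yield_when_idle 1 -np 4 ./a.out

The waiting rank still polls, so this is yielding rather than true
sleeping, but it keeps a blocked rank from monopolizing a core.)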

Threads with rank > 0? Now, this scares me! If all your threads go
into the bcast, then I guess the application is not correct from the
MPI standard's perspective (i.e., on each communicator there can be
only one collective in flight at any moment). In MPI, each process
(and not each thread) has a rank, and each process exists in each
communicator only once. In other words, since each collective is
bound to a specific communicator, on each of your processes only one
thread should enter the MPI_Bcast if you want only ONE collective.

   george.
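
To make George's point concrete, here is a minimal sketch in C (mine,
not from the thread; the MPI_THREAD_FUNNELED level and the broadcast
payload are illustrative assumptions). The process may run many
threads, but exactly one thread per process enters the collective,
once, on the communicator:

    /* sketch: one MPI_Bcast call per process, made by a single thread */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int provided, rank, data = 0;

        /* FUNNELED: only the thread that initialized MPI makes MPI calls */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0)
            data = 42;  /* assumed payload */

        /* one collective on MPI_COMM_WORLD: every process calls it
         * exactly once, no matter how many threads it runs */
        MPI_Bcast(&data, 1, MPI_INT, 0, MPI_COMM_WORLD);

        printf("rank %d got %d\n", rank, data);
        MPI_Finalize();
        return 0;
    }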

>
>
> I hope this is clearer; I apologize for not being clear in the
> first place.
>
> Vincent
>
>
>
> Eugene Loh wrote:
>>
>> Vincent Rotival wrote:
>>
>>> The solution I retained was for the main thread to Isend the data
>>> separately to each of the other threads, which use Irecv plus a
>>> loop on MPI_Test to check for completion of the Irecv (see the
>>> sketch at the end of this message). It might be dirty, but it
>>> works much better than using Bcast.
>>
>> Thanks for the clarification.
>>
>> But this strikes me more as a question about the MPI standard than
>> about the Open MPI implementation. That is, what you really want is
>> for the MPI API to support a non-blocking form of collectives. You
>> want control to return to the user program before the
>> barrier/bcast/etc. operation has completed. That's an API change
>> (see the note at the end of this message).
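Vincent's retained workaround, sketched in C under assumptions (his
code is Fortran; the tag, payload, and the roughly 1 ms POSIX
nanosleep back-off between MPI_Test polls are mine). Sleeping between
tests is what keeps the waiting ranks from burning CPU:

    /* sketch of the Isend / Irecv + MPI_Test pattern */
    #include <mpi.h>
    #include <stdlib.h>
    #include <time.h>

    #define TAG 0  /* arbitrary tag, my assumption */

    int main(int argc, char **argv)
    {
        int rank, size, data = 0;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {
            data = 42;  /* assumed payload */
            /* root posts one Isend per peer, then completes them all */
            MPI_Request *reqs = malloc((size - 1) * sizeof(MPI_Request));
            for (int peer = 1; peer < size; peer++)
                MPI_Isend(&data, 1, MPI_INT, peer, TAG,
                          MPI_COMM_WORLD, &reqs[peer - 1]);
            MPI_Waitall(size - 1, reqs, MPI_STATUSES_IGNORE);
            free(reqs);
        } else {
            MPI_Request req;
            int done = 0;
            MPI_Irecv(&data, 1, MPI_INT, 0, TAG, MPI_COMM_WORLD, &req);
            while (!done) {
                MPI_Test(&req, &done, MPI_STATUS_IGNORE);
                if (!done) {
                    /* sleep ~1 ms between polls instead of spinning */
                    struct timespec ts = { 0, 1000000 };
                    nanosleep(&ts, NULL);
                }
            }
        }

        MPI_Finalize();
        return 0;
    }

The trade-off is latency: each receiver notices completion up to one
back-off period late, which is usually acceptable when the point is to
idle cheaply rather than to react instantly.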



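Eugene's "API change" did eventually happen: MPI-3 (2012, well after
this thread) standardized non-blocking collectives. A minimal sketch
of the shape of that API (payload assumed, as above):

    /* sketch of an MPI-3 non-blocking broadcast */
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, data = 0;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        if (rank == 0)
            data = 42;  /* assumed payload */

        /* control returns immediately; the broadcast completes in the
         * background and is finished by the MPI_Wait */
        MPI_Ibcast(&data, 1, MPI_INT, 0, MPI_COMM_WORLD, &req);
        /* ... other work could overlap here ... */
        MPI_Wait(&req, MPI_STATUS_IGNORE);

        MPI_Finalize();
        return 0;
    }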