Subject: Re: [OMPI users] MPI_Finalize() maintains load at 100%.
From: Özgür Pekçağlıyan (ozgur.pekcagliyan_at_[hidden])
Date: 2014-05-23 09:52:24


Sorry, I assumed you were working with a group of machines (different
computers, each with its own resources, connected through a network). I am
not sure whether this would work in your situation, but you can still give
it a try: if you keep process 0 waiting to receive data, it may consume
less CPU time, though again I am not certain of this.
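For what it's worth, here is a minimal sketch of that notification pattern
in plain C/MPI. The tag value and the dummy integer payload are placeholders
I chose for illustration, and whether a blocking MPI_Recv really lowers the
CPU load depends on how the MPI library makes progress (Open MPI polls by
default), so treat it as something to experiment with rather than a fix:

#include <mpi.h>

int main(int argc, char *argv[])
{
    int myid, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    if (myid == 0) {
        /* Block until every other rank reports that it is done.
         * Tag 0 and the int payload are arbitrary placeholders. */
        for (int i = 1; i < nprocs; i++) {
            int done;
            MPI_Recv(&done, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        }
    } else {
        /* ... long computation here ... */
        int done = 1;
        MPI_Send(&done, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}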

On Fri, May 23, 2014 at 4:49 PM, Özgür Pekçağlıyan <
ozgur.pekcagliyan_at_[hidden]> wrote:

> In my code, I am using MPI_Send and MPI_Recv to notify P0 that every
> other process has finished its own calculations. Maybe you can also use
> the same method and keep P0 waiting until it receives some data from the
> other processes?
>
>
> On Fri, May 23, 2014 at 4:39 PM, Ralph Castain <rhc_at_[hidden]> wrote:
>
>> Hmmm...that is a bit of a problem. I've added a note to see if we can
>> turn down the aggressiveness of the MPI layer once we hit finalize, but
>> that won't solve your immediate problem.
>>
>> Our usual suggestion is that you have each proc call finalize before
>> going on to do other things. This avoids the situation you are describing -
>> after all, if the MPI phase is done, there really isn't any reason to not
>> call MPI_Finalize before moving on to other things. You don't have to delay
>> the call until the end of the program.
>>
>> Ralph
>>
>> On May 23, 2014, at 1:45 AM, Iván Cores González <ivan.coresg_at_[hidden]>
>> wrote:
>>
>> > Hi all,
>> > I have a performance problem with the following code.
>> >
>> > #include <mpi.h>
>> > #include <stdio.h>
>> > #include <unistd.h>
>> >
>> > int main( int argc, char *argv[] )
>> > {
>> >     MPI_Init(&argc, &argv);
>> >
>> >     int myid;
>> >     MPI_Comm_rank(MPI_COMM_WORLD, &myid);
>> >
>> >     // Imagine some important job here, but P0 finishes first.
>> >     if (myid != 0)
>> >     {
>> >         sleep(20);
>> >     }
>> >
>> >     printf("Calling MPI_Finalize() ...\n");
>> >     // Process 0 keeps one core at 100% load inside this call.
>> >     MPI_Finalize();
>> >     printf("Ok\n");
>> >
>> >     return 0;
>> > }
>> >
>> > If some MPI processes call MPI_Finalize() while other processes are
>> > still "working", MPI_Finalize() keeps the core load at 100% and does
>> > not let other threads or jobs on the processor run faster.
>> >
>> > Any idea how to avoid this load, or how to force P0 to sleep?
>> >
>> > Thanks,
>> > Ivan Cores.
>>
>
>
>
> --
> Özgür Pekçağlıyan
> Computer Engineer (M.Sc.)
>
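For completeness, a rough sketch of Ralph's suggestion above: call
MPI_Finalize() as soon as the MPI phase is over and only then start the
non-MPI work. This assumes the remaining work needs no further
communication; do_local_work() is just a placeholder name for that part:

#include <mpi.h>
#include <unistd.h>

/* Placeholder for the purely local, non-MPI part of the job. */
static void do_local_work(void)
{
    sleep(20);
}

int main(int argc, char *argv[])
{
    int myid;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    /* ... MPI phase: all communication and shared computation ... */

    /* Leave MPI as soon as it is no longer needed, so no rank sits
     * spinning in the library while the others keep computing. */
    MPI_Finalize();

    if (myid != 0)
        do_local_work();   /* rank 0 simply exits */

    return 0;
}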

-- 
Özgür Pekçağlıyan
Computer Engineer (M.Sc.)