Hmmm...that is a bit of a problem. I've added a note to see if we can turn down the aggressiveness of the MPI layer once we hit finalize, but that won't solve your immediate problem.
Our usual suggestion is that each proc call MPI_Finalize before going on to do other things. This avoids the situation you are describing - after all, if the MPI phase is done, there really isn't any reason not to call MPI_Finalize before moving on to other things. You don't have to delay the call until the end of the program.
On May 23, 2014, at 1:45 AM, Iván Cores González <ivan.coresg_at_[hidden]> wrote:
> Hi all,
> I have a performance problem with the next code.
> #include <mpi.h>
> #include <stdio.h>
> #include <unistd.h>
>
> int main( int argc, char *argv[] )
> {
>     MPI_Init(&argc, &argv);
>     int myid;
>     MPI_Comm_rank(MPI_COMM_WORLD, &myid);
>     // Imagine some important job here, but P0 ends first.
>     if (myid != 0)
>         sleep(30); // the other ranks keep "working"
>     printf("Calling MPI_Finalize() ...\n");
>     MPI_Finalize(); // Process 0 reaches this first and keeps its core load at 100%.
>     return 0;
> }
> If some MPI processes call MPI_Finalize() while other processes are still
> "working", MPI_Finalize() keeps the core load at 100% and does not
> allow other threads or jobs on the processor to run faster.
> Any idea how to avoid the load, or how to force P0 to sleep?
> Ivan Cores.