
Subject: Re: [OMPI users] MPI_COMM_DUP freeze with OpenMPI 1.4.1
From: francoise.roch_at_[hidden]
Date: 2011-05-26 04:43:46


Jeff Squyres wrote:
> On May 24, 2011, at 4:42 AM, francoise.roch_at_[hidden] wrote:
>
>
>>> CALL MPI_COMM_SIZE(id%COMM, id%NPROCS, IERR)
>>> IF ( id%PAR .eq. 0 ) THEN
>>>    IF ( id%MYID .eq. MASTER ) THEN
>>>       color = MPI_UNDEFINED
>>>    ELSE
>>>       color = 0
>>>    END IF
>>>    CALL MPI_COMM_SPLIT( id%COMM, color, 0, id%COMM_NODES, IERR )
>>>    id%NSLAVES = id%NPROCS - 1
>>> ELSE
>>>    CALL MPI_COMM_DUP( id%COMM, id%COMM_NODES, IERR )
>>>    id%NSLAVES = id%NPROCS
>>> END IF
>>>
>>> IF (id%PAR .ne. 0 .or. id%MYID .NE. MASTER) THEN
>>>    CALL MPI_COMM_DUP( id%COMM_NODES, id%COMM_LOAD, IERR )
>>> ENDIF
>>>
>> Actually, we are looking at the first case, i.e. id%PAR = 0. The
>> MPI_COMM_SPLIT routine is called by all the processes and creates a new
>> communicator named "id%COMM_NODES", which contains all the slaves but
>> not the master. The first MPI_COMM_DUP is not executed; the second one
>> is executed by all the slave processes (id%MYID .NE. MASTER), and since
>> it operates on "id%COMM_NODES" it involves all the processes of that
>> communicator.
>>
>
> Hmm.
>
> Are you sure that id%myid is relative to id%comm? I don't see its assignment in your code snippet.
>
>
Yes, id%myid is relative to id%comm. It is assigned just before in the
code, by all the processes, with the following call:

  CALL MPI_COMM_RANK(id%COMM, id%MYID, IERR)
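
In case it helps, here is a minimal, self-contained sketch of the same
split/dup pattern (my own test program, not the MUMPS source), assuming
the hosted case id%PAR = 0 and using MPI_COMM_WORLD in place of id%COMM:

! Standalone sketch: rank 0 (the master) passes MPI_UNDEFINED to
! MPI_COMM_SPLIT and so receives MPI_COMM_NULL; the other ranks form
! the slave communicator, and only they take part in MPI_COMM_DUP.
      PROGRAM SPLIT_DUP_SKETCH
      IMPLICIT NONE
      INCLUDE 'mpif.h'
      INTEGER, PARAMETER :: MASTER = 0
      INTEGER :: MYID, NPROCS, COLOR, IERR
      INTEGER :: COMM_NODES, COMM_LOAD
      CALL MPI_INIT(IERR)
      CALL MPI_COMM_RANK(MPI_COMM_WORLD, MYID, IERR)
      CALL MPI_COMM_SIZE(MPI_COMM_WORLD, NPROCS, IERR)
! The master opts out of the split; all other ranks share color 0.
      IF (MYID .EQ. MASTER) THEN
         COLOR = MPI_UNDEFINED
      ELSE
         COLOR = 0
      END IF
! MPI_COMM_SPLIT is collective over MPI_COMM_WORLD: every rank calls
! it; the master gets back COMM_NODES = MPI_COMM_NULL.
      CALL MPI_COMM_SPLIT(MPI_COMM_WORLD, COLOR, 0, COMM_NODES, IERR)
! MPI_COMM_DUP is collective over COMM_NODES only, so the master
! (which holds MPI_COMM_NULL) must not participate.
      IF (MYID .NE. MASTER) THEN
         CALL MPI_COMM_DUP(COMM_NODES, COMM_LOAD, IERR)
         CALL MPI_COMM_FREE(COMM_LOAD, IERR)
         CALL MPI_COMM_FREE(COMM_NODES, IERR)
      END IF
      CALL MPI_FINALIZE(IERR)
      END PROGRAM SPLIT_DUP_SKETCH

With a correct MPI implementation this runs to completion for any
number of ranks; in our real application the second MPI_COMM_DUP is
where the slaves freeze under OpenMPI 1.4.1.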