
Subject: Re: [OMPI users] MPI_COMM_DUP freeze with OpenMPI 1.4.1
From: Jeff Squyres (jsquyres_at_[hidden])
Date: 2011-05-26 06:12:34


On May 26, 2011, at 4:43 AM, francoise.roch_at_[hidden] wrote:

>>>>       CALL MPI_COMM_SIZE(id%COMM, id%NPROCS, IERR )
>>>>       IF ( id%PAR .eq. 0 ) THEN
>>>>          IF ( id%MYID .eq. MASTER ) THEN
>>>>             color = MPI_UNDEFINED
>>>>          ELSE
>>>>             color = 0
>>>>          END IF
>>>>          CALL MPI_COMM_SPLIT( id%COMM, color, 0, id%COMM_NODES, IERR )
>>>>          id%NSLAVES = id%NPROCS - 1
>>>>       ELSE
>>>>          CALL MPI_COMM_DUP( id%COMM, id%COMM_NODES, IERR )
>>>>          id%NSLAVES = id%NPROCS
>>>>       END IF
>>>>
>>>>       IF (id%PAR .ne. 0 .or. id%MYID .NE. MASTER) THEN
>>>>          CALL MPI_COMM_DUP( id%COMM_NODES, id%COMM_LOAD, IERR )
>>>>       ENDIF
>>>>
> Yes, id%MYID is relative to id%COMM. It is assigned just before this point in the code, by all the processes, with the following call:
> CALL MPI_COMM_RANK(id%COMM, id%MYID, IERR)

I'm out of ideas. :-(

Can you create a short reproducer code?
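Something along these lines might serve as a starting point -- a minimal sketch only, assuming PAR=0 and using MPI_COMM_WORLD in place of id%COMM; the program and variable names below are illustrative and not taken from MUMPS:

program comm_dup_repro
   ! Hypothetical reproducer sketch (not from the original code): it exercises
   ! the same split/dup pattern as the quoted fragment, with par hard-coded
   ! and MPI_COMM_WORLD standing in for id%COMM.
   use mpi
   implicit none
   integer, parameter :: MASTER = 0
   integer :: par, myid, nprocs, color, ierr
   integer :: comm_nodes, comm_load

   par        = 0                ! 0 = host does not take part in the worker group
   comm_nodes = MPI_COMM_NULL
   comm_load  = MPI_COMM_NULL

   call MPI_INIT(ierr)
   call MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierr)
   call MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)

   if (par .eq. 0) then
      if (myid .eq. MASTER) then
         color = MPI_UNDEFINED   ! master stays out of comm_nodes
      else
         color = 0
      end if
      call MPI_COMM_SPLIT(MPI_COMM_WORLD, color, 0, comm_nodes, ierr)
   else
      call MPI_COMM_DUP(MPI_COMM_WORLD, comm_nodes, ierr)
   end if

   ! Only ranks that belong to comm_nodes take part in the second dup;
   ! with par=0 the master received MPI_COMM_NULL from the split and skips it.
   if (par .ne. 0 .or. myid .ne. MASTER) then
      call MPI_COMM_DUP(comm_nodes, comm_load, ierr)
   end if

   call MPI_BARRIER(MPI_COMM_WORLD, ierr)
   if (myid .eq. MASTER) print *, 'second MPI_COMM_DUP completed'

   if (comm_load  .ne. MPI_COMM_NULL) call MPI_COMM_FREE(comm_load, ierr)
   if (comm_nodes .ne. MPI_COMM_NULL) call MPI_COMM_FREE(comm_nodes, ierr)
   call MPI_FINALIZE(ierr)
end program comm_dup_repro

Built with mpif90 and run on a few ranks, this should finish immediately; if the second MPI_COMM_DUP hangs, you have the reproducer.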

-- 
Jeff Squyres
jsquyres_at_[hidden]
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/